Feb 27 18:44:59 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 27 18:44:59 crc restorecon[4763]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 18:44:59 crc restorecon[4763]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc 
restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 18:44:59 crc 
restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 
18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 18:44:59 crc 
restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 18:44:59 crc 
restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:44:59
crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 
18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 18:44:59 crc 
restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc 
restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc 
restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 18:44:59 crc restorecon[4763]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 18:44:59 crc restorecon[4763]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:44:59 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 
crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc 
restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc 
restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc 
restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 18:45:00 crc 
restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 18:45:00 crc restorecon[4763]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 18:45:00 crc restorecon[4763]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 18:45:00 crc restorecon[4763]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 27 18:45:01 crc kubenswrapper[4981]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 27 18:45:01 crc kubenswrapper[4981]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 27 18:45:01 crc kubenswrapper[4981]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 27 18:45:01 crc kubenswrapper[4981]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 27 18:45:01 crc kubenswrapper[4981]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 27 18:45:01 crc kubenswrapper[4981]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.320134 4981 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328245 4981 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328274 4981 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328283 4981 feature_gate.go:330] unrecognized feature gate: Example Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328294 4981 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328302 4981 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328311 4981 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328320 4981 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328333 4981 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328344 4981 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328353 4981 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328363 4981 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328373 4981 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328383 4981 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328404 4981 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328413 4981 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328422 4981 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328431 4981 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328439 4981 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328447 4981 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328455 4981 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328464 4981 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328471 4981 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328480 4981 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328489 4981 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328496 4981 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328506 4981 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328514 4981 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328523 4981 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328530 4981 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328538 4981 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328546 4981 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328554 4981 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328563 4981 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328571 4981 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328578 4981 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328586 4981 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328594 4981 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328604 4981 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328614 4981 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328623 4981 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328632 4981 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328641 4981 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328651 4981 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328660 4981 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328668 4981 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328677 4981 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328685 4981 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328694 4981 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328703 4981 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328711 4981 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328719 4981 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328727 4981 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328735 4981 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328743 4981 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328751 4981 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328759 4981 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328768 4981 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328776 4981 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328784 4981 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328796 4981 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328805 4981 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328813 4981 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328821 4981 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328830 4981 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328838 4981 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328846 4981 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328856 4981 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328863 4981 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328871 4981 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328880 4981 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.328888 4981 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329029 4981 flags.go:64] FLAG: --address="0.0.0.0"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329047 4981 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329084 4981 flags.go:64] FLAG: --anonymous-auth="true"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329096 4981 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329108 4981 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329118 4981 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329130 4981 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329141 4981 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329151 4981 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329160 4981 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329170 4981 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329180 4981 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329190 4981 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329200 4981 flags.go:64] FLAG: --cgroup-root=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329210 4981 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329219 4981 flags.go:64] FLAG: --client-ca-file=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329228 4981 flags.go:64] FLAG: --cloud-config=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329238 4981 flags.go:64] FLAG: --cloud-provider=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329247 4981 flags.go:64] FLAG: --cluster-dns="[]"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329260 4981 flags.go:64] FLAG: --cluster-domain=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329269 4981 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329279 4981 flags.go:64] FLAG: --config-dir=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329288 4981 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329298 4981 flags.go:64] FLAG: --container-log-max-files="5"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329310 4981 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329319 4981 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329328 4981 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329338 4981 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329347 4981 flags.go:64] FLAG: --contention-profiling="false"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329357 4981 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329366 4981 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329375 4981 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329384 4981 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329395 4981 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329404 4981 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329414 4981 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329422 4981 flags.go:64] FLAG: --enable-load-reader="false"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329433 4981 flags.go:64] FLAG: --enable-server="true"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329442 4981 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329454 4981 flags.go:64] FLAG: --event-burst="100"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329463 4981 flags.go:64] FLAG: --event-qps="50"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329472 4981 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329506 4981 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329517 4981 flags.go:64] FLAG: --eviction-hard=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329528 4981 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329539 4981 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329548 4981 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329558 4981 flags.go:64] FLAG: --eviction-soft=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329567 4981 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329577 4981 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329586 4981 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329595 4981 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329604 4981 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329614 4981 flags.go:64] FLAG: --fail-swap-on="true"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329623 4981 flags.go:64] FLAG: --feature-gates=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329634 4981 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329643 4981 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329652 4981 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329662 4981 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329672 4981 flags.go:64] FLAG: --healthz-port="10248"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329682 4981 flags.go:64] FLAG: --help="false"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329692 4981 flags.go:64] FLAG: --hostname-override=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329701 4981 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329710 4981 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329719 4981 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329728 4981 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329737 4981 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329746 4981 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329755 4981 flags.go:64] FLAG: --image-service-endpoint=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329763 4981 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329772 4981 flags.go:64] FLAG: --kube-api-burst="100"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329781 4981 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329791 4981 flags.go:64] FLAG: --kube-api-qps="50"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329801 4981 flags.go:64] FLAG: --kube-reserved=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329810 4981 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329818 4981 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329828 4981 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329837 4981 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329846 4981 flags.go:64] FLAG: --lock-file=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329854 4981 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329863 4981 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329874 4981 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329887 4981 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329896 4981 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329906 4981 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329915 4981 flags.go:64] FLAG: --logging-format="text"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329924 4981 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329934 4981 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329943 4981 flags.go:64] FLAG: --manifest-url=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329951 4981 flags.go:64] FLAG: --manifest-url-header=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329963 4981 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329972 4981 flags.go:64] FLAG: --max-open-files="1000000"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329983 4981 flags.go:64] FLAG: --max-pods="110"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.329992 4981 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330001 4981 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330010 4981 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330019 4981 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330028 4981 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330037 4981 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330046 4981 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330088 4981 flags.go:64] FLAG: --node-status-max-images="50"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330097 4981 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330106 4981 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330116 4981 flags.go:64] FLAG: --pod-cidr=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330125 4981 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330137 4981 flags.go:64] FLAG: --pod-manifest-path=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330146 4981 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330156 4981 flags.go:64] FLAG: --pods-per-core="0"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330165 4981 flags.go:64] FLAG: --port="10250"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330175 4981 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330184 4981 flags.go:64] FLAG: --provider-id=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330193 4981 flags.go:64] FLAG: --qos-reserved=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330202 4981 flags.go:64] FLAG: --read-only-port="10255"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330211 4981 flags.go:64] FLAG: --register-node="true"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330230 4981 flags.go:64] FLAG: --register-schedulable="true"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330239 4981 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330253 4981 flags.go:64] FLAG: --registry-burst="10"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330263 4981 flags.go:64] FLAG: --registry-qps="5"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330272 4981 flags.go:64] FLAG: --reserved-cpus=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330281 4981 flags.go:64] FLAG: --reserved-memory=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330292 4981 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330301 4981 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330311 4981 flags.go:64] FLAG: --rotate-certificates="false"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330322 4981 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330331 4981 flags.go:64] FLAG: --runonce="false"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330340 4981 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330349 4981 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330358 4981 flags.go:64] FLAG: --seccomp-default="false"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330368 4981 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330378 4981 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330387 4981 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330396 4981 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330406 4981 flags.go:64] FLAG: --storage-driver-password="root"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330415 4981 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330425 4981 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330434 4981 flags.go:64] FLAG: --storage-driver-user="root"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330443 4981 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330452 4981 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330462 4981 flags.go:64] FLAG: --system-cgroups=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330470 4981 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330484 4981 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330493 4981 flags.go:64] FLAG: --tls-cert-file=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330502 4981 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330514 4981 flags.go:64] FLAG: --tls-min-version=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330523 4981 flags.go:64] FLAG: --tls-private-key-file=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330533 4981 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330545 4981 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330555 4981 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330565 4981 flags.go:64] FLAG: --v="2"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330577 4981 flags.go:64] FLAG: --version="false"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330589 4981 flags.go:64] FLAG: --vmodule=""
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330600 4981 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.330610 4981 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.330852 4981 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.330864 4981 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.330873 4981 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.330882 4981 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.330892 4981 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.330900 4981 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.330909 4981 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.330920 4981 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.330951 4981 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.330961 4981 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.330969 4981 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.330978 4981 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.330986 4981 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.330994 4981 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331002 4981 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331010 4981 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331018 4981 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331026 4981 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331034 4981 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331042 4981 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331050 4981 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331079 4981 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331088 4981 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331095 4981 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331103 4981 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331115 4981 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331123 4981 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331131 4981 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331141 4981 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331149 4981 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331157 4981 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331165 4981 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331176 4981 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331185 4981 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331193 4981 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331201 4981 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331212 4981 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331222 4981 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331230 4981 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331239 4981 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331247 4981 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331255 4981 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331263 4981 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331271 4981 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331279 4981 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331286 4981 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331295 4981 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331303 4981 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331329 4981 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331337 4981 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331345 4981 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331356 4981 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331366 4981 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331375 4981 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331385 4981 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331393 4981 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331402 4981 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331413 4981 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331421 4981 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331430 4981 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331438 4981 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331445 4981 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331453 4981 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331462 4981 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331471 4981 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331479 4981 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331487 4981 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331495 4981 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331503 4981 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331511 4981 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.331519 4981 feature_gate.go:330] unrecognized feature gate: Example
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.331531 4981 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.346538 4981 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.346617 4981 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.346780 4981 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.346806 4981 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.346817 4981 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.346826 4981 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.346836 4981 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.346845 4981 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.346852 4981 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.346861 4981 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.346870 4981 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.346878 4981 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.346885 4981 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.346893 4981 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.346901 4981 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.346909 4981 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.346917 4981 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.346925 4981 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.346962 4981 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.346970 4981 feature_gate.go:330]
unrecognized feature gate: ManagedBootImages Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.346978 4981 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.346986 4981 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.346994 4981 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347002 4981 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347010 4981 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347018 4981 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347026 4981 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347035 4981 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347046 4981 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347085 4981 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347095 4981 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347105 4981 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347116 4981 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347127 4981 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347138 4981 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347148 4981 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347178 4981 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347190 4981 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347200 4981 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347208 4981 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347216 4981 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347225 4981 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347233 4981 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347240 4981 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347248 4981 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347259 4981 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347269 4981 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347278 4981 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347286 4981 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347294 4981 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347329 4981 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347340 4981 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347351 4981 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347362 4981 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347372 4981 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347381 4981 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347389 4981 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347397 4981 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347406 4981 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347414 4981 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347421 4981 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347430 4981 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347438 4981 feature_gate.go:330] unrecognized feature gate: Example
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347446 4981 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347454 4981 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347462 4981 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347470 4981 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347478 4981 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347486 4981 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347494 4981 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347502 4981 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347513 4981 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347535 4981 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.347548 4981 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347856 4981 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347879 4981 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347889 4981 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347898 4981 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347906 4981 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347914 4981 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347922 4981 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347929 4981 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347938 4981 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347958 4981 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347966 4981 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347974 4981 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347982 4981 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347990 4981 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.347997 4981 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348008 4981 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348020 4981 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348030 4981 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348042 4981 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348077 4981 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348087 4981 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348096 4981 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348105 4981 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348114 4981 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348123 4981 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348131 4981 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348139 4981 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348147 4981 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348155 4981 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348163 4981 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348172 4981 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348180 4981 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348187 4981 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348195 4981 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348221 4981 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348229 4981 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348238 4981 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348245 4981 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348253 4981 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348261 4981 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348269 4981 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348277 4981 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348285 4981 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348295 4981 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348304 4981 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348315 4981 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348326 4981 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348339 4981 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348349 4981 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348358 4981 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348368 4981 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348378 4981 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348386 4981 feature_gate.go:330] unrecognized feature gate: Example
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348394 4981 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348402 4981 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348411 4981 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348419 4981 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348427 4981 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348434 4981 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348442 4981 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348451 4981 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348459 4981 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348469 4981 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348477 4981 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348485 4981 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348493 4981 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348501 4981 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348508 4981 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348516 4981 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348524 4981 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.348545 4981 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.348559 4981 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.350491 4981 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 27 18:45:01 crc kubenswrapper[4981]: E0227 18:45:01.356604 4981 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.362082 4981 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.362279 4981 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.364700 4981 server.go:997] "Starting client certificate rotation"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.364756 4981 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.365120 4981 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.396097 4981 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 27 18:45:01 crc kubenswrapper[4981]: E0227 18:45:01.399795 4981 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.402451 4981 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.424250 4981 log.go:25] "Validated CRI v1 runtime API"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.467180 4981 log.go:25] "Validated CRI v1 image API"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.470298 4981 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.477584 4981 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-27-18-39-17-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.477688 4981 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.505689 4981 manager.go:217] Machine: {Timestamp:2026-02-27 18:45:01.501798117 +0000 UTC m=+0.980579307 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:adfb44cb-eacb-4bdb-ac3c-af6421f66947 BootID:1b99e48b-f223-4d99-b29b-1960f0d38aec Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:c3:70:50 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:c3:70:50 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:8c:95:33 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:cf:07:7e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:66:52:dc Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:45:16:ca Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:15:62:54 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:16:1d:e9:c5:7b:2d Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:92:da:ab:bf:ef:1e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.506454 4981 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.506700 4981 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.510586 4981 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.510891 4981 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.510973 4981 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.511409 4981 topology_manager.go:138] "Creating topology manager with none policy"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.511430 4981 container_manager_linux.go:303] "Creating device plugin manager"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.512133 4981 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.512169 4981 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.512475 4981 state_mem.go:36] "Initialized new in-memory state store"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.512601 4981 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.517404 4981 kubelet.go:418] "Attempting to sync node with API server"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.517446 4981 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.517483 4981 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.517512 4981 kubelet.go:324] "Adding apiserver pod source"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.517533 4981 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.523130 4981 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.526393 4981 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.526532 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused
Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.526544 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused
Feb 27 18:45:01 crc kubenswrapper[4981]: E0227 18:45:01.526648 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError"
Feb 27 18:45:01 crc kubenswrapper[4981]: E0227 18:45:01.526658 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.537413 4981 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 27 18:45:01 crc
kubenswrapper[4981]: I0227 18:45:01.543163 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.543219 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.543236 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.543251 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.543275 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.543288 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.543302 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.543324 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.543341 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.543357 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.543378 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.543391 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.543434 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.544153 4981 server.go:1280] "Started kubelet" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 
18:45:01.545327 4981 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.545329 4981 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.546146 4981 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 27 18:45:01 crc systemd[1]: Started Kubernetes Kubelet. Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.546851 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.548355 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.548416 4981 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.548698 4981 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.548742 4981 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.548943 4981 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 27 18:45:01 crc kubenswrapper[4981]: E0227 18:45:01.549358 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:45:01 crc kubenswrapper[4981]: E0227 18:45:01.550017 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="200ms" 
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.550376 4981 factory.go:55] Registering systemd factory Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.550425 4981 factory.go:221] Registration of the systemd container factory successfully Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.550866 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Feb 27 18:45:01 crc kubenswrapper[4981]: E0227 18:45:01.551008 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.552103 4981 factory.go:153] Registering CRI-O factory Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.552158 4981 factory.go:221] Registration of the crio container factory successfully Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.552275 4981 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.552314 4981 factory.go:103] Registering Raw factory Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.552352 4981 manager.go:1196] Started watching for new ooms in manager Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.552393 4981 server.go:460] "Adding debug handlers to kubelet server" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.553839 4981 manager.go:319] Starting recovery of all 
containers Feb 27 18:45:01 crc kubenswrapper[4981]: E0227 18:45:01.553604 4981 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.38:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18982ecab61fe4b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.544105145 +0000 UTC m=+1.022886335,LastTimestamp:2026-02-27 18:45:01.544105145 +0000 UTC m=+1.022886335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.578881 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579106 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579149 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579180 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579210 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579239 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579271 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579301 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579338 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579367 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579397 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579427 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579457 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579491 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579519 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579550 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579577 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579606 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579636 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579664 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579691 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579718 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579748 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579777 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579806 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579838 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579876 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579908 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" 
seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579938 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579964 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.579990 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580020 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580049 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580121 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580152 4981 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580221 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580250 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580277 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580306 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580338 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580370 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580400 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580429 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580456 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580482 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580510 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580540 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580581 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580607 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580634 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580661 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580727 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580802 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580846 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580879 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580910 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580943 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.580972 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581123 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581158 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581184 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581212 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581245 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581279 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581310 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581340 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581371 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581399 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581424 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581450 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581477 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581506 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581527 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581550 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581570 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581597 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581620 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581643 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581671 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581698 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581722 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581747 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581779 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581802 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581886 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581936 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.581966 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.582000 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.582040 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.582107 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.582139 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.582171 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.582198 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.582223 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.582248 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" 
seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.582275 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.582298 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.582322 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.582347 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.582370 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.582394 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 
18:45:01.582418 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.582440 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.582469 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.585740 4981 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.585841 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.585881 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 27 18:45:01 crc 
kubenswrapper[4981]: I0227 18:45:01.585912 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.585945 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.585978 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586006 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586042 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586107 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586140 4981 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586170 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586200 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586232 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586266 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586297 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586326 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586354 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586382 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586415 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586444 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586500 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586532 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586559 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586591 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586632 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586663 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586696 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586729 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586758 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586783 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586811 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586833 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586856 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586879 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586900 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586962 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.586985 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587007 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587029 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587102 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587133 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587170 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587198 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587226 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587253 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587285 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587312 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587336 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587379 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587407 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587443 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587474 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587503 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587531 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587562 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587603 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587636 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587666 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587693 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587731 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587758 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587796 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587833 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587874 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587906 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587931 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587959 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.587989 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588021 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588101 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" 
seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588143 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588191 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588225 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588254 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588288 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588316 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 
18:45:01.588350 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588386 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588420 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588450 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588475 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588499 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588524 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588547 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588570 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588593 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588614 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588642 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588667 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588693 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588718 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588753 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588777 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588800 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588822 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" 
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588846 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588869 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588955 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588981 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.588990 4981 manager.go:324] Recovery completed Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.589008 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.589200 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.589265 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.589294 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.589319 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.589344 4981 reconstruct.go:97] "Volume reconstruction finished" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.589361 4981 reconciler.go:26] "Reconciler: start to sync state" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.606505 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.609621 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.609685 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.609708 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.611292 4981 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.611325 4981 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.611412 4981 state_mem.go:36] "Initialized new in-memory state store" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.624565 4981 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.627184 4981 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.627240 4981 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.627290 4981 kubelet.go:2335] "Starting kubelet main sync loop" Feb 27 18:45:01 crc kubenswrapper[4981]: E0227 18:45:01.627360 4981 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.628037 4981 policy_none.go:49] "None policy: Start" Feb 27 18:45:01 crc kubenswrapper[4981]: W0227 18:45:01.628645 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused Feb 27 18:45:01 crc kubenswrapper[4981]: E0227 18:45:01.628724 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" 
logger="UnhandledError" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.629368 4981 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.629403 4981 state_mem.go:35] "Initializing new in-memory state store" Feb 27 18:45:01 crc kubenswrapper[4981]: E0227 18:45:01.649492 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.679254 4981 manager.go:334] "Starting Device Plugin manager" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.679317 4981 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.679337 4981 server.go:79] "Starting device plugin registration server" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.679905 4981 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.679931 4981 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.680790 4981 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.680903 4981 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.680914 4981 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 27 18:45:01 crc kubenswrapper[4981]: E0227 18:45:01.691951 4981 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.728261 4981 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.728382 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.730008 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.730099 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.730121 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.730448 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.730735 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.730796 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.732157 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.732234 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.732255 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.732551 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.732612 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.732672 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.732705 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.732817 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.732866 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.734739 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.734776 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.734793 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.734799 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.734850 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.734866 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.735103 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.735144 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.735192 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.736640 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.736725 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.736739 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.737372 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.737416 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.737426 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.737700 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.737869 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.738105 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.739658 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.739782 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.739809 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.740577 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.740652 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.740676 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.741035 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.741136 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.744015 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.744098 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.744109 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:01 crc kubenswrapper[4981]: E0227 18:45:01.751432 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="400ms" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.780045 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.781662 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.781754 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.781776 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.781850 4981 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 18:45:01 crc kubenswrapper[4981]: E0227 18:45:01.782925 4981 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.38:6443: connect: connection refused" node="crc" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.791945 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.791997 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.792025 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.792070 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.792101 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" 
(UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.792157 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.792209 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.792236 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.792285 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.792335 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.792360 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.792477 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.792505 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.792601 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.792627 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.894580 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.894650 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.894692 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.894726 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.894763 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.894800 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.894848 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.894856 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.894906 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.894857 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.894975 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.894935 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.895018 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.894993 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.894856 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.895077 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.895049 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.894967 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.895339 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.895417 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.895469 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.895488 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.895561 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.895598 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.895608 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.895686 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.895758 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.895708 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.895683 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.895711 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.983814 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.985989 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.986108 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.986124 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:45:01 crc kubenswrapper[4981]: I0227 18:45:01.986166 4981 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 18:45:01 crc kubenswrapper[4981]: E0227 18:45:01.987103 4981 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.38:6443: connect: connection refused" node="crc"
Feb 27 18:45:02 crc kubenswrapper[4981]: I0227 18:45:02.063665 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 27 18:45:02 crc kubenswrapper[4981]: I0227 18:45:02.079228 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 27 18:45:02 crc kubenswrapper[4981]: I0227 18:45:02.094605 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 18:45:02 crc kubenswrapper[4981]: I0227 18:45:02.109130 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 18:45:02 crc kubenswrapper[4981]: I0227 18:45:02.115232 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 27 18:45:02 crc kubenswrapper[4981]: E0227 18:45:02.152795 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="800ms"
Feb 27 18:45:02 crc kubenswrapper[4981]: W0227 18:45:02.153489 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-4670dd916e00a2b6fe03842e1913205f800df0e445b15da3d927e60c71384829 WatchSource:0}: Error finding container 4670dd916e00a2b6fe03842e1913205f800df0e445b15da3d927e60c71384829: Status 404 returned error can't find the container with id 4670dd916e00a2b6fe03842e1913205f800df0e445b15da3d927e60c71384829
Feb 27 18:45:02 crc kubenswrapper[4981]: W0227 18:45:02.161140 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-2fe5796360c17573ad0dc7b601617adfd082a7b58d7820f564f2e46282e5c828 WatchSource:0}: Error finding container 2fe5796360c17573ad0dc7b601617adfd082a7b58d7820f564f2e46282e5c828: Status 404 returned error can't find the container with id 2fe5796360c17573ad0dc7b601617adfd082a7b58d7820f564f2e46282e5c828
Feb 27 18:45:02 crc kubenswrapper[4981]: W0227 18:45:02.166900 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-aca0ca2c47168b4ab3a04cf47be6355b5941e221faf9d554dc21943b597af47c WatchSource:0}: Error finding container aca0ca2c47168b4ab3a04cf47be6355b5941e221faf9d554dc21943b597af47c: Status 404 returned error can't find the container with id aca0ca2c47168b4ab3a04cf47be6355b5941e221faf9d554dc21943b597af47c
Feb 27 18:45:02 crc kubenswrapper[4981]: W0227 18:45:02.170671 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-dfb86cc7a52566491a1ceba6580c3bfc87983f73f0b754542b3706fa65233e40 WatchSource:0}: Error finding container dfb86cc7a52566491a1ceba6580c3bfc87983f73f0b754542b3706fa65233e40: Status 404 returned error can't find the container with id dfb86cc7a52566491a1ceba6580c3bfc87983f73f0b754542b3706fa65233e40
Feb 27 18:45:02 crc kubenswrapper[4981]: W0227 18:45:02.172031 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-bcc09083b675b4935672d6d2da34918ae00cdf06977fab206dfb5f321a8f365a WatchSource:0}: Error finding container bcc09083b675b4935672d6d2da34918ae00cdf06977fab206dfb5f321a8f365a: Status 404 returned error can't find the container with id bcc09083b675b4935672d6d2da34918ae00cdf06977fab206dfb5f321a8f365a
Feb 27 18:45:02 crc kubenswrapper[4981]: I0227 18:45:02.387438 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:45:02 crc kubenswrapper[4981]: I0227 18:45:02.390632 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:45:02 crc kubenswrapper[4981]: I0227 18:45:02.390711 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:45:02 crc kubenswrapper[4981]: I0227 18:45:02.390733 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:45:02 crc kubenswrapper[4981]: I0227 18:45:02.390785 4981 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 18:45:02 crc kubenswrapper[4981]: E0227 18:45:02.391554 4981 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.38:6443: connect: connection refused" node="crc"
Feb 27 18:45:02 crc kubenswrapper[4981]: I0227 18:45:02.547865 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused
Feb 27 18:45:02 crc kubenswrapper[4981]: W0227 18:45:02.560166 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused
Feb 27 18:45:02 crc kubenswrapper[4981]: E0227 18:45:02.560369 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError"
Feb 27 18:45:02 crc kubenswrapper[4981]: I0227 18:45:02.639472 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4670dd916e00a2b6fe03842e1913205f800df0e445b15da3d927e60c71384829"}
Feb 27 18:45:02 crc kubenswrapper[4981]: I0227 18:45:02.641821 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bcc09083b675b4935672d6d2da34918ae00cdf06977fab206dfb5f321a8f365a"}
Feb 27 18:45:02 crc kubenswrapper[4981]: I0227 18:45:02.643469 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"dfb86cc7a52566491a1ceba6580c3bfc87983f73f0b754542b3706fa65233e40"}
Feb 27 18:45:02 crc kubenswrapper[4981]: I0227 18:45:02.645786 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"aca0ca2c47168b4ab3a04cf47be6355b5941e221faf9d554dc21943b597af47c"}
Feb 27 18:45:02 crc kubenswrapper[4981]: I0227 18:45:02.647097 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2fe5796360c17573ad0dc7b601617adfd082a7b58d7820f564f2e46282e5c828"}
Feb 27 18:45:02 crc kubenswrapper[4981]: W0227 18:45:02.910439 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused
Feb 27 18:45:02 crc kubenswrapper[4981]: E0227 18:45:02.910579 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError"
Feb 27 18:45:02 crc kubenswrapper[4981]: E0227 18:45:02.953851 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="1.6s"
Feb 27 18:45:02 crc kubenswrapper[4981]: W0227 18:45:02.982760 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused
Feb 27 18:45:02 crc kubenswrapper[4981]: E0227 18:45:02.982895 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError"
Feb 27 18:45:03 crc kubenswrapper[4981]: W0227 18:45:03.029432 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused
Feb 27 18:45:03 crc kubenswrapper[4981]: E0227 18:45:03.029515 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.192154 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.194226 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.194292 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.194315 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.194368 4981 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 18:45:03 crc kubenswrapper[4981]: E0227 18:45:03.195167 4981 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.38:6443: connect: connection refused" node="crc"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.488880 4981 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 27 18:45:03 crc kubenswrapper[4981]: E0227 18:45:03.490432 4981 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.548031 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.656372 4981 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3" exitCode=0
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.656567 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3"}
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.656602 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.660300 4981 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455" exitCode=0
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.660381 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455"}
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.660691 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.661808 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.661872 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.661892 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.662583 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.662627 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.662645 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.664112 4981 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973" exitCode=0
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.664157 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973"}
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.664286 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.664688 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.666017 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.666089 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.666108 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.666212 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.666254 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.666272 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.668671 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"dc26ccdc87598f2980e1dc35395d3846450da6de6c1818de589d95441798e232"}
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.668722 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e33e23de51fc20ecaa8ed6d4c7f561de316c8b2adf3f2eb94b0f4284c0ae982a"}
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.668750 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"47da708cea4fc8dee9c3d6ac4bb7473cdc255bfed85666ddf72c1b49d93d94ec"}
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.670828 4981 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="273e5f0d5ff4cf6238bcbc9b79621756922942638298fb586b14ae71f1d95688" exitCode=0
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.670874 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"273e5f0d5ff4cf6238bcbc9b79621756922942638298fb586b14ae71f1d95688"}
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.671001 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.672400 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.672432 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:45:03 crc kubenswrapper[4981]: I0227 18:45:03.672445 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:45:04 crc kubenswrapper[4981]: W0227 18:45:04.489601 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused
Feb 27 18:45:04 crc kubenswrapper[4981]: E0227 18:45:04.489727 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError"
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.548520 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused
Feb 27 18:45:04 crc kubenswrapper[4981]: E0227 18:45:04.555911 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="3.2s"
Feb 27 18:45:04 crc kubenswrapper[4981]: W0227 18:45:04.637644 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.38:6443: connect: connection refused
Feb 27 18:45:04 crc kubenswrapper[4981]: E0227 18:45:04.637818 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.38:6443: connect: connection refused" logger="UnhandledError"
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.679273 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"64f5edf9f42b84e2bf4ad4c6a88f9b2a14c248fb3fc1f821c9720d310d5d4a28"}
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.679337 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.680690 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.680752 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.680772 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.682471 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8094de35e827134fa5b9827035972708e7deb8d5ed21e43051209cd7da41af18"}
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.682548 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0d5e73486c7d159ec69ff6b39c2a2a93dfe4fa7cdf5b62bdfb91a6bb5d8b2e47"}
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.682573 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4b78d032b71162dda8d49cc9cd3a6febb3da5326777dc926b9656919b41d320e"}
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.682732 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.684168 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.684225 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.684250 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.690913 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58"}
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.690966 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e"}
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.690990 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812"}
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.693495 4981 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866" exitCode=0
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.693544 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866"}
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.693764 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.699626 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.699724 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.699758 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.711042 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe"}
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.711241 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.713511 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.713552 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.713571 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.802626 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.804507 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.804549 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.804566 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:45:04 crc kubenswrapper[4981]: I0227 18:45:04.804596 4981 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 18:45:04 crc kubenswrapper[4981]: E0227 18:45:04.805309 4981 kubelet_node_status.go:99] "Unable to
register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.38:6443: connect: connection refused" node="crc" Feb 27 18:45:05 crc kubenswrapper[4981]: I0227 18:45:05.718848 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"66ea83eb04abe6d4d39081d7867578cd868c99c59fd2b5d7b0ee45af26b5cde8"} Feb 27 18:45:05 crc kubenswrapper[4981]: I0227 18:45:05.718920 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03"} Feb 27 18:45:05 crc kubenswrapper[4981]: I0227 18:45:05.718978 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:05 crc kubenswrapper[4981]: I0227 18:45:05.720617 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:05 crc kubenswrapper[4981]: I0227 18:45:05.720676 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:05 crc kubenswrapper[4981]: I0227 18:45:05.720699 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:05 crc kubenswrapper[4981]: I0227 18:45:05.722475 4981 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a" exitCode=0 Feb 27 18:45:05 crc kubenswrapper[4981]: I0227 18:45:05.722648 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a"} Feb 27 18:45:05 crc kubenswrapper[4981]: I0227 18:45:05.722690 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:05 crc kubenswrapper[4981]: I0227 18:45:05.722746 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:05 crc kubenswrapper[4981]: I0227 18:45:05.722693 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:05 crc kubenswrapper[4981]: I0227 18:45:05.724007 4981 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 18:45:05 crc kubenswrapper[4981]: I0227 18:45:05.724129 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:05 crc kubenswrapper[4981]: I0227 18:45:05.724966 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:05 crc kubenswrapper[4981]: I0227 18:45:05.725019 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:05 crc kubenswrapper[4981]: I0227 18:45:05.725048 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:05 crc kubenswrapper[4981]: I0227 18:45:05.725045 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:05 crc kubenswrapper[4981]: I0227 18:45:05.725104 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:05 crc kubenswrapper[4981]: I0227 18:45:05.725155 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:05 crc 
kubenswrapper[4981]: I0227 18:45:05.725174 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:05 crc kubenswrapper[4981]: I0227 18:45:05.725125 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:05 crc kubenswrapper[4981]: I0227 18:45:05.725240 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:05 crc kubenswrapper[4981]: I0227 18:45:05.727991 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:05 crc kubenswrapper[4981]: I0227 18:45:05.728032 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:05 crc kubenswrapper[4981]: I0227 18:45:05.728077 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:06 crc kubenswrapper[4981]: I0227 18:45:06.733179 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3"} Feb 27 18:45:06 crc kubenswrapper[4981]: I0227 18:45:06.733256 4981 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 18:45:06 crc kubenswrapper[4981]: I0227 18:45:06.733337 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:06 crc kubenswrapper[4981]: I0227 18:45:06.733271 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df"} Feb 27 18:45:06 crc kubenswrapper[4981]: I0227 18:45:06.733411 4981 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8"} Feb 27 18:45:06 crc kubenswrapper[4981]: I0227 18:45:06.734532 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:06 crc kubenswrapper[4981]: I0227 18:45:06.734559 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:06 crc kubenswrapper[4981]: I0227 18:45:06.734569 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:07 crc kubenswrapper[4981]: I0227 18:45:07.744974 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9"} Feb 27 18:45:07 crc kubenswrapper[4981]: I0227 18:45:07.745407 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381"} Feb 27 18:45:07 crc kubenswrapper[4981]: I0227 18:45:07.745203 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:07 crc kubenswrapper[4981]: I0227 18:45:07.750122 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:07 crc kubenswrapper[4981]: I0227 18:45:07.750187 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:07 crc kubenswrapper[4981]: I0227 18:45:07.750208 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 27 18:45:07 crc kubenswrapper[4981]: I0227 18:45:07.879624 4981 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 27 18:45:08 crc kubenswrapper[4981]: I0227 18:45:08.005586 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:08 crc kubenswrapper[4981]: I0227 18:45:08.007497 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:08 crc kubenswrapper[4981]: I0227 18:45:08.007553 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:08 crc kubenswrapper[4981]: I0227 18:45:08.007573 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:08 crc kubenswrapper[4981]: I0227 18:45:08.007609 4981 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 18:45:08 crc kubenswrapper[4981]: I0227 18:45:08.137238 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 18:45:08 crc kubenswrapper[4981]: I0227 18:45:08.137613 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:08 crc kubenswrapper[4981]: I0227 18:45:08.139377 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:08 crc kubenswrapper[4981]: I0227 18:45:08.139459 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:08 crc kubenswrapper[4981]: I0227 18:45:08.139481 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:08 crc kubenswrapper[4981]: I0227 18:45:08.311153 4981 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 18:45:08 crc kubenswrapper[4981]: I0227 18:45:08.311443 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:08 crc kubenswrapper[4981]: I0227 18:45:08.313561 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:08 crc kubenswrapper[4981]: I0227 18:45:08.313610 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:08 crc kubenswrapper[4981]: I0227 18:45:08.313624 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:08 crc kubenswrapper[4981]: I0227 18:45:08.747631 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:08 crc kubenswrapper[4981]: I0227 18:45:08.749418 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:08 crc kubenswrapper[4981]: I0227 18:45:08.749479 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:08 crc kubenswrapper[4981]: I0227 18:45:08.749493 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:08 crc kubenswrapper[4981]: I0227 18:45:08.752809 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:45:08 crc kubenswrapper[4981]: I0227 18:45:08.753085 4981 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 18:45:08 crc kubenswrapper[4981]: I0227 18:45:08.753163 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:08 crc kubenswrapper[4981]: I0227 
18:45:08.754612 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:08 crc kubenswrapper[4981]: I0227 18:45:08.754657 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:08 crc kubenswrapper[4981]: I0227 18:45:08.754675 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:09 crc kubenswrapper[4981]: I0227 18:45:09.084185 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:45:09 crc kubenswrapper[4981]: I0227 18:45:09.752352 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:09 crc kubenswrapper[4981]: I0227 18:45:09.753916 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:09 crc kubenswrapper[4981]: I0227 18:45:09.754041 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:09 crc kubenswrapper[4981]: I0227 18:45:09.754087 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:09 crc kubenswrapper[4981]: I0227 18:45:09.852602 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 27 18:45:09 crc kubenswrapper[4981]: I0227 18:45:09.852931 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:09 crc kubenswrapper[4981]: I0227 18:45:09.854992 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:09 crc kubenswrapper[4981]: I0227 18:45:09.855083 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 18:45:09 crc kubenswrapper[4981]: I0227 18:45:09.855103 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:09 crc kubenswrapper[4981]: I0227 18:45:09.884842 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:45:09 crc kubenswrapper[4981]: I0227 18:45:09.891361 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 18:45:09 crc kubenswrapper[4981]: I0227 18:45:09.891484 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:09 crc kubenswrapper[4981]: I0227 18:45:09.893196 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:09 crc kubenswrapper[4981]: I0227 18:45:09.893262 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:09 crc kubenswrapper[4981]: I0227 18:45:09.893281 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:09 crc kubenswrapper[4981]: I0227 18:45:09.899621 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 18:45:10 crc kubenswrapper[4981]: I0227 18:45:10.759021 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:10 crc kubenswrapper[4981]: I0227 18:45:10.759556 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:10 crc kubenswrapper[4981]: I0227 18:45:10.761747 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:10 crc 
kubenswrapper[4981]: I0227 18:45:10.761799 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:10 crc kubenswrapper[4981]: I0227 18:45:10.761823 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:10 crc kubenswrapper[4981]: I0227 18:45:10.762136 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:10 crc kubenswrapper[4981]: I0227 18:45:10.762191 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:10 crc kubenswrapper[4981]: I0227 18:45:10.762213 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:11 crc kubenswrapper[4981]: E0227 18:45:11.692519 4981 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 18:45:11 crc kubenswrapper[4981]: I0227 18:45:11.882808 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 27 18:45:11 crc kubenswrapper[4981]: I0227 18:45:11.883150 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:11 crc kubenswrapper[4981]: I0227 18:45:11.890452 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:11 crc kubenswrapper[4981]: I0227 18:45:11.890520 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:11 crc kubenswrapper[4981]: I0227 18:45:11.890548 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:12 crc kubenswrapper[4981]: I0227 18:45:12.001505 4981 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 18:45:12 crc kubenswrapper[4981]: I0227 18:45:12.001788 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:12 crc kubenswrapper[4981]: I0227 18:45:12.004423 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:12 crc kubenswrapper[4981]: I0227 18:45:12.004491 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:12 crc kubenswrapper[4981]: I0227 18:45:12.004515 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:12 crc kubenswrapper[4981]: I0227 18:45:12.009725 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 18:45:12 crc kubenswrapper[4981]: I0227 18:45:12.595889 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 18:45:12 crc kubenswrapper[4981]: I0227 18:45:12.764482 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:12 crc kubenswrapper[4981]: I0227 18:45:12.765850 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:12 crc kubenswrapper[4981]: I0227 18:45:12.765946 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:12 crc kubenswrapper[4981]: I0227 18:45:12.765972 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:13 crc kubenswrapper[4981]: I0227 18:45:13.768642 4981 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Feb 27 18:45:13 crc kubenswrapper[4981]: I0227 18:45:13.770770 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:13 crc kubenswrapper[4981]: I0227 18:45:13.770829 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:13 crc kubenswrapper[4981]: I0227 18:45:13.770848 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:15 crc kubenswrapper[4981]: I0227 18:45:15.002122 4981 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 18:45:15 crc kubenswrapper[4981]: I0227 18:45:15.002247 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 18:45:15 crc kubenswrapper[4981]: W0227 18:45:15.485135 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 27 18:45:15 crc kubenswrapper[4981]: I0227 18:45:15.485343 4981 trace.go:236] Trace[467753118]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Feb-2026 18:45:05.483) (total time: 10001ms): Feb 27 18:45:15 crc kubenswrapper[4981]: Trace[467753118]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:45:15.485) Feb 27 18:45:15 crc kubenswrapper[4981]: Trace[467753118]: [10.001652035s] [10.001652035s] END Feb 27 18:45:15 crc kubenswrapper[4981]: E0227 18:45:15.485386 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 27 18:45:15 crc kubenswrapper[4981]: I0227 18:45:15.549444 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 27 18:45:15 crc kubenswrapper[4981]: W0227 18:45:15.925641 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 27 18:45:15 crc kubenswrapper[4981]: I0227 18:45:15.925786 4981 trace.go:236] Trace[1742038680]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Feb-2026 18:45:05.923) (total time: 10002ms): Feb 27 18:45:15 crc kubenswrapper[4981]: Trace[1742038680]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (18:45:15.925) Feb 27 18:45:15 crc kubenswrapper[4981]: Trace[1742038680]: [10.002158882s] [10.002158882s] END Feb 27 18:45:15 crc kubenswrapper[4981]: E0227 18:45:15.925826 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed 
to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 27 18:45:16 crc kubenswrapper[4981]: E0227 18:45:16.738892 4981 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:16Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.18982ecab61fe4b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.544105145 +0000 UTC m=+1.022886335,LastTimestamp:2026-02-27 18:45:01.544105145 +0000 UTC m=+1.022886335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:16 crc kubenswrapper[4981]: E0227 18:45:16.744089 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:16Z is after 2026-02-23T05:33:13Z" interval="6.4s" Feb 27 18:45:16 crc kubenswrapper[4981]: E0227 18:45:16.747087 4981 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-27T18:45:16Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 18:45:16 crc kubenswrapper[4981]: E0227 18:45:16.748898 4981 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:16Z is after 2026-02-23T05:33:13Z" node="crc" Feb 27 18:45:16 crc kubenswrapper[4981]: W0227 18:45:16.750364 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:16Z is after 2026-02-23T05:33:13Z Feb 27 18:45:16 crc kubenswrapper[4981]: E0227 18:45:16.750461 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:16Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 18:45:16 crc kubenswrapper[4981]: W0227 18:45:16.752368 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:16Z is after 2026-02-23T05:33:13Z Feb 27 18:45:16 crc kubenswrapper[4981]: E0227 18:45:16.752503 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to 
list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:16Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 18:45:16 crc kubenswrapper[4981]: I0227 18:45:16.756716 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:16Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:16 crc kubenswrapper[4981]: I0227 18:45:16.760034 4981 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 27 18:45:16 crc kubenswrapper[4981]: I0227 18:45:16.760163 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 27 18:45:16 crc kubenswrapper[4981]: I0227 18:45:16.767248 4981 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 27 18:45:16 crc kubenswrapper[4981]: I0227 18:45:16.767317 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 27 18:45:17 crc kubenswrapper[4981]: I0227 18:45:17.553854 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:17Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:17 crc kubenswrapper[4981]: I0227 18:45:17.784296 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 27 18:45:17 crc kubenswrapper[4981]: I0227 18:45:17.786568 4981 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="66ea83eb04abe6d4d39081d7867578cd868c99c59fd2b5d7b0ee45af26b5cde8" exitCode=255
Feb 27 18:45:17 crc kubenswrapper[4981]: I0227 18:45:17.786628 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"66ea83eb04abe6d4d39081d7867578cd868c99c59fd2b5d7b0ee45af26b5cde8"}
Feb 27 18:45:17 crc kubenswrapper[4981]: I0227 18:45:17.786841 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:45:17 crc kubenswrapper[4981]: I0227 18:45:17.788102 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:45:17 crc kubenswrapper[4981]: I0227 18:45:17.788167 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:45:17 crc kubenswrapper[4981]: I0227 18:45:17.788182 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:45:17 crc kubenswrapper[4981]: I0227 18:45:17.789103 4981 scope.go:117] "RemoveContainer" containerID="66ea83eb04abe6d4d39081d7867578cd868c99c59fd2b5d7b0ee45af26b5cde8"
Feb 27 18:45:18 crc kubenswrapper[4981]: I0227 18:45:18.077319 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 18:45:18 crc kubenswrapper[4981]: I0227 18:45:18.554727 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:18Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:18 crc kubenswrapper[4981]: I0227 18:45:18.760871 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 18:45:18 crc kubenswrapper[4981]: I0227 18:45:18.793149 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 27 18:45:18 crc kubenswrapper[4981]: I0227 18:45:18.794333 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 27 18:45:18 crc kubenswrapper[4981]: I0227 18:45:18.798155 4981 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a16aad9bc3aa9de9f6af220736500145e4e937e4eaa290e1b4d6f1dd24e23534" exitCode=255
Feb 27 18:45:18 crc kubenswrapper[4981]: I0227 18:45:18.798216 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a16aad9bc3aa9de9f6af220736500145e4e937e4eaa290e1b4d6f1dd24e23534"}
Feb 27 18:45:18 crc kubenswrapper[4981]: I0227 18:45:18.798312 4981 scope.go:117] "RemoveContainer" containerID="66ea83eb04abe6d4d39081d7867578cd868c99c59fd2b5d7b0ee45af26b5cde8"
Feb 27 18:45:18 crc kubenswrapper[4981]: I0227 18:45:18.798325 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:45:18 crc kubenswrapper[4981]: I0227 18:45:18.799640 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:45:18 crc kubenswrapper[4981]: I0227 18:45:18.799691 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:45:18 crc kubenswrapper[4981]: I0227 18:45:18.799710 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:45:18 crc kubenswrapper[4981]: I0227 18:45:18.800785 4981 scope.go:117] "RemoveContainer" containerID="a16aad9bc3aa9de9f6af220736500145e4e937e4eaa290e1b4d6f1dd24e23534"
Feb 27 18:45:18 crc kubenswrapper[4981]: E0227 18:45:18.801128 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 18:45:18 crc kubenswrapper[4981]: I0227 18:45:18.806734 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 18:45:19 crc kubenswrapper[4981]: I0227 18:45:19.084872 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 18:45:19 crc kubenswrapper[4981]: I0227 18:45:19.552321 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:19Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:19 crc kubenswrapper[4981]: I0227 18:45:19.802580 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 27 18:45:19 crc kubenswrapper[4981]: I0227 18:45:19.806890 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:45:19 crc kubenswrapper[4981]: I0227 18:45:19.808233 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:45:19 crc kubenswrapper[4981]: I0227 18:45:19.808293 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:45:19 crc kubenswrapper[4981]: I0227 18:45:19.808311 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:45:19 crc kubenswrapper[4981]: I0227 18:45:19.809180 4981 scope.go:117] "RemoveContainer" containerID="a16aad9bc3aa9de9f6af220736500145e4e937e4eaa290e1b4d6f1dd24e23534"
Feb 27 18:45:19 crc kubenswrapper[4981]: E0227 18:45:19.809456 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 18:45:20 crc kubenswrapper[4981]: I0227 18:45:20.553133 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:20Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:20 crc kubenswrapper[4981]: I0227 18:45:20.810099 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:45:20 crc kubenswrapper[4981]: I0227 18:45:20.812304 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:45:20 crc kubenswrapper[4981]: I0227 18:45:20.812344 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:45:20 crc kubenswrapper[4981]: I0227 18:45:20.812356 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:45:20 crc kubenswrapper[4981]: I0227 18:45:20.813031 4981 scope.go:117] "RemoveContainer" containerID="a16aad9bc3aa9de9f6af220736500145e4e937e4eaa290e1b4d6f1dd24e23534"
Feb 27 18:45:20 crc kubenswrapper[4981]: E0227 18:45:20.813219 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 18:45:20 crc kubenswrapper[4981]: W0227 18:45:20.860160 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:20Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:20 crc kubenswrapper[4981]: E0227 18:45:20.860233 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:20Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 18:45:21 crc kubenswrapper[4981]: W0227 18:45:21.424439 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:21Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:21 crc kubenswrapper[4981]: E0227 18:45:21.424577 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:21Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 18:45:21 crc kubenswrapper[4981]: I0227 18:45:21.550479 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:21Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:21 crc kubenswrapper[4981]: E0227 18:45:21.692825 4981 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 18:45:21 crc kubenswrapper[4981]: I0227 18:45:21.923991 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 27 18:45:21 crc kubenswrapper[4981]: I0227 18:45:21.924356 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:45:21 crc kubenswrapper[4981]: I0227 18:45:21.926568 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:45:21 crc kubenswrapper[4981]: I0227 18:45:21.926622 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:45:21 crc kubenswrapper[4981]: I0227 18:45:21.926635 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:45:21 crc kubenswrapper[4981]: I0227 18:45:21.947520 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 27 18:45:22 crc kubenswrapper[4981]: I0227 18:45:22.552325 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:22Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:22 crc kubenswrapper[4981]: I0227 18:45:22.816479 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:45:22 crc kubenswrapper[4981]: I0227 18:45:22.817853 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:45:22 crc kubenswrapper[4981]: I0227 18:45:22.817897 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:45:22 crc kubenswrapper[4981]: I0227 18:45:22.817912 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:45:23 crc kubenswrapper[4981]: I0227 18:45:23.149591 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:45:23 crc kubenswrapper[4981]: E0227 18:45:23.149662 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:23Z is after 2026-02-23T05:33:13Z" interval="7s"
Feb 27 18:45:23 crc kubenswrapper[4981]: I0227 18:45:23.151532 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:45:23 crc kubenswrapper[4981]: I0227 18:45:23.151587 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:45:23 crc kubenswrapper[4981]: I0227 18:45:23.151606 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:45:23 crc kubenswrapper[4981]: I0227 18:45:23.151641 4981 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 18:45:23 crc kubenswrapper[4981]: E0227 18:45:23.156526 4981 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:23Z is after 2026-02-23T05:33:13Z" node="crc"
Feb 27 18:45:23 crc kubenswrapper[4981]: I0227 18:45:23.553222 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:23Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:24 crc kubenswrapper[4981]: I0227 18:45:24.552790 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:24Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:24 crc kubenswrapper[4981]: I0227 18:45:24.750049 4981 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 27 18:45:24 crc kubenswrapper[4981]: E0227 18:45:24.755845 4981 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:24Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 18:45:25 crc kubenswrapper[4981]: I0227 18:45:25.002778 4981 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 27 18:45:25 crc kubenswrapper[4981]: I0227 18:45:25.002864 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 27 18:45:25 crc kubenswrapper[4981]: I0227 18:45:25.552840 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:25Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:26 crc kubenswrapper[4981]: I0227 18:45:26.552674 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:26Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:26 crc kubenswrapper[4981]: E0227 18:45:26.745589 4981 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:26Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.18982ecab61fe4b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.544105145 +0000 UTC m=+1.022886335,LastTimestamp:2026-02-27 18:45:01.544105145 +0000 UTC m=+1.022886335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 18:45:27 crc kubenswrapper[4981]: I0227 18:45:27.554725 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:27Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:27 crc kubenswrapper[4981]: W0227 18:45:27.967800 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:27Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:27 crc kubenswrapper[4981]: E0227 18:45:27.967919 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:27Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 18:45:28 crc kubenswrapper[4981]: I0227 18:45:28.077338 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 18:45:28 crc kubenswrapper[4981]: I0227 18:45:28.077720 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:45:28 crc kubenswrapper[4981]: I0227 18:45:28.079581 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:45:28 crc kubenswrapper[4981]: I0227 18:45:28.079641 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:45:28 crc kubenswrapper[4981]: I0227 18:45:28.079663 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:45:28 crc kubenswrapper[4981]: I0227 18:45:28.080631 4981 scope.go:117] "RemoveContainer" containerID="a16aad9bc3aa9de9f6af220736500145e4e937e4eaa290e1b4d6f1dd24e23534"
Feb 27 18:45:28 crc kubenswrapper[4981]: E0227 18:45:28.080931 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 18:45:28 crc kubenswrapper[4981]: I0227 18:45:28.550639 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:28Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:29 crc kubenswrapper[4981]: W0227 18:45:29.523382 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:29Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:29 crc kubenswrapper[4981]: E0227 18:45:29.523533 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:29Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 18:45:29 crc kubenswrapper[4981]: I0227 18:45:29.551964 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:29Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:30 crc kubenswrapper[4981]: E0227 18:45:30.155629 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:30Z is after 2026-02-23T05:33:13Z" interval="7s"
Feb 27 18:45:30 crc kubenswrapper[4981]: I0227 18:45:30.156698 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:45:30 crc kubenswrapper[4981]: I0227 18:45:30.159607 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:45:30 crc kubenswrapper[4981]: I0227 18:45:30.159683 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:45:30 crc kubenswrapper[4981]: I0227 18:45:30.159712 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:45:30 crc kubenswrapper[4981]: I0227 18:45:30.159759 4981 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 18:45:30 crc kubenswrapper[4981]: E0227 18:45:30.165521 4981 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:30Z is after 2026-02-23T05:33:13Z" node="crc"
Feb 27 18:45:30 crc kubenswrapper[4981]: I0227 18:45:30.552814 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:30Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:31 crc kubenswrapper[4981]: W0227 18:45:31.239855 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:31Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:31 crc kubenswrapper[4981]: E0227 18:45:31.239996 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:31Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 18:45:31 crc kubenswrapper[4981]: I0227 18:45:31.552451 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:31Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:31 crc kubenswrapper[4981]: E0227 18:45:31.693034 4981 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 18:45:32 crc kubenswrapper[4981]: I0227 18:45:32.553158 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:32Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:32 crc kubenswrapper[4981]: W0227 18:45:32.770730 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:32Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:32 crc kubenswrapper[4981]: E0227 18:45:32.770862 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:32Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 27 18:45:33 crc kubenswrapper[4981]: I0227 18:45:33.551339 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:33Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:34 crc kubenswrapper[4981]: I0227 18:45:34.214738 4981 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:41196->192.168.126.11:10357: read: connection reset by peer" start-of-body=
Feb 27 18:45:34 crc kubenswrapper[4981]: I0227 18:45:34.216179 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:41196->192.168.126.11:10357: read: connection reset by peer"
Feb 27 18:45:34 crc kubenswrapper[4981]: I0227 18:45:34.216305 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 18:45:34 crc kubenswrapper[4981]: I0227 18:45:34.216660 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:45:34 crc kubenswrapper[4981]: I0227 18:45:34.219217 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:45:34 crc kubenswrapper[4981]: I0227 18:45:34.219272 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:45:34 crc kubenswrapper[4981]: I0227 18:45:34.219298 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:45:34 crc kubenswrapper[4981]: I0227 18:45:34.220157 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"e33e23de51fc20ecaa8ed6d4c7f561de316c8b2adf3f2eb94b0f4284c0ae982a"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Feb 27 18:45:34 crc kubenswrapper[4981]: I0227 18:45:34.220413 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://e33e23de51fc20ecaa8ed6d4c7f561de316c8b2adf3f2eb94b0f4284c0ae982a" gracePeriod=30
Feb 27 18:45:34 crc kubenswrapper[4981]: I0227 18:45:34.553308 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:34Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:34 crc kubenswrapper[4981]: I0227 18:45:34.864443 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Feb 27 18:45:34 crc kubenswrapper[4981]: I0227 18:45:34.865854 4981 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e33e23de51fc20ecaa8ed6d4c7f561de316c8b2adf3f2eb94b0f4284c0ae982a" exitCode=255
Feb 27 18:45:34 crc kubenswrapper[4981]: I0227 18:45:34.865945 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e33e23de51fc20ecaa8ed6d4c7f561de316c8b2adf3f2eb94b0f4284c0ae982a"}
Feb 27 18:45:34 crc kubenswrapper[4981]: I0227 18:45:34.866013 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e33ee2f6a8a6b33109ce0e0c8edea9d353fcadcb447dab73fc0763ca9f484a25"}
Feb 27 18:45:34 crc kubenswrapper[4981]: I0227 18:45:34.866273 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:45:34 crc kubenswrapper[4981]: I0227 18:45:34.867836 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:45:34 crc kubenswrapper[4981]: I0227 18:45:34.867888 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:45:34 crc kubenswrapper[4981]: I0227 18:45:34.867910 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:45:35 crc kubenswrapper[4981]: I0227 18:45:35.552803 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:35Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:36 crc kubenswrapper[4981]: I0227 18:45:36.550599 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:36Z is after 2026-02-23T05:33:13Z
Feb 27 18:45:36 crc kubenswrapper[4981]: E0227 18:45:36.750917 4981 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:36Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.18982ecab61fe4b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.544105145 +0000 UTC m=+1.022886335,LastTimestamp:2026-02-27 18:45:01.544105145 +0000 UTC m=+1.022886335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 18:45:37 crc kubenswrapper[4981]: E0227 18:45:37.161696 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:37Z is after 2026-02-23T05:33:13Z" interval="7s"
Feb 27 18:45:37 crc kubenswrapper[4981]: I0227 18:45:37.165748 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:45:37 crc kubenswrapper[4981]: I0227 18:45:37.167443 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:45:37 crc kubenswrapper[4981]: I0227 18:45:37.167500 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:45:37 crc kubenswrapper[4981]: I0227 18:45:37.167519 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:45:37 crc kubenswrapper[4981]: I0227 18:45:37.167554 4981 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 18:45:37 crc kubenswrapper[4981]: E0227 18:45:37.170747 4981 kubelet_node_status.go:99] "Unable to register node with API server" err="Post
\"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:37Z is after 2026-02-23T05:33:13Z" node="crc" Feb 27 18:45:37 crc kubenswrapper[4981]: I0227 18:45:37.552291 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:37Z is after 2026-02-23T05:33:13Z Feb 27 18:45:38 crc kubenswrapper[4981]: I0227 18:45:38.552385 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:38Z is after 2026-02-23T05:33:13Z Feb 27 18:45:39 crc kubenswrapper[4981]: I0227 18:45:39.552308 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:39Z is after 2026-02-23T05:33:13Z Feb 27 18:45:40 crc kubenswrapper[4981]: I0227 18:45:40.553130 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:40Z is after 2026-02-23T05:33:13Z Feb 27 18:45:40 crc kubenswrapper[4981]: I0227 18:45:40.918810 4981 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 27 18:45:40 crc 
kubenswrapper[4981]: E0227 18:45:40.924734 4981 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:40Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 18:45:40 crc kubenswrapper[4981]: E0227 18:45:40.926000 4981 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Feb 27 18:45:41 crc kubenswrapper[4981]: I0227 18:45:41.552659 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:41Z is after 2026-02-23T05:33:13Z Feb 27 18:45:41 crc kubenswrapper[4981]: E0227 18:45:41.694691 4981 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 18:45:42 crc kubenswrapper[4981]: I0227 18:45:42.002174 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 18:45:42 crc kubenswrapper[4981]: I0227 18:45:42.002541 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:42 crc kubenswrapper[4981]: I0227 18:45:42.004191 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:42 crc kubenswrapper[4981]: I0227 
18:45:42.004251 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:42 crc kubenswrapper[4981]: I0227 18:45:42.004279 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:42 crc kubenswrapper[4981]: I0227 18:45:42.552756 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:42Z is after 2026-02-23T05:33:13Z Feb 27 18:45:42 crc kubenswrapper[4981]: I0227 18:45:42.595645 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 18:45:42 crc kubenswrapper[4981]: I0227 18:45:42.628356 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:42 crc kubenswrapper[4981]: I0227 18:45:42.629911 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:42 crc kubenswrapper[4981]: I0227 18:45:42.630076 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:42 crc kubenswrapper[4981]: I0227 18:45:42.630141 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:42 crc kubenswrapper[4981]: I0227 18:45:42.630981 4981 scope.go:117] "RemoveContainer" containerID="a16aad9bc3aa9de9f6af220736500145e4e937e4eaa290e1b4d6f1dd24e23534" Feb 27 18:45:42 crc kubenswrapper[4981]: I0227 18:45:42.897724 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 
27 18:45:42 crc kubenswrapper[4981]: I0227 18:45:42.900524 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:42 crc kubenswrapper[4981]: I0227 18:45:42.902439 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:42 crc kubenswrapper[4981]: I0227 18:45:42.902493 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:42 crc kubenswrapper[4981]: I0227 18:45:42.902512 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:43 crc kubenswrapper[4981]: I0227 18:45:43.553552 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:43Z is after 2026-02-23T05:33:13Z Feb 27 18:45:43 crc kubenswrapper[4981]: I0227 18:45:43.906874 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 27 18:45:43 crc kubenswrapper[4981]: I0227 18:45:43.908181 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 27 18:45:43 crc kubenswrapper[4981]: I0227 18:45:43.912028 4981 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="62ebbefe9f60cab1b3da6260ca3b106ec4dfa0f623f33ea611acd1ba50cf68fa" exitCode=255 Feb 27 18:45:43 crc kubenswrapper[4981]: I0227 18:45:43.912108 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"62ebbefe9f60cab1b3da6260ca3b106ec4dfa0f623f33ea611acd1ba50cf68fa"} Feb 27 18:45:43 crc kubenswrapper[4981]: I0227 18:45:43.912198 4981 scope.go:117] "RemoveContainer" containerID="a16aad9bc3aa9de9f6af220736500145e4e937e4eaa290e1b4d6f1dd24e23534" Feb 27 18:45:43 crc kubenswrapper[4981]: I0227 18:45:43.912419 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:43 crc kubenswrapper[4981]: I0227 18:45:43.913886 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:43 crc kubenswrapper[4981]: I0227 18:45:43.913976 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:43 crc kubenswrapper[4981]: I0227 18:45:43.914008 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:43 crc kubenswrapper[4981]: I0227 18:45:43.915170 4981 scope.go:117] "RemoveContainer" containerID="62ebbefe9f60cab1b3da6260ca3b106ec4dfa0f623f33ea611acd1ba50cf68fa" Feb 27 18:45:43 crc kubenswrapper[4981]: E0227 18:45:43.915557 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 18:45:44 crc kubenswrapper[4981]: E0227 18:45:44.167895 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-27T18:45:44Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 27 18:45:44 crc kubenswrapper[4981]: I0227 18:45:44.171036 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:44 crc kubenswrapper[4981]: I0227 18:45:44.173445 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:44 crc kubenswrapper[4981]: I0227 18:45:44.173495 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:44 crc kubenswrapper[4981]: I0227 18:45:44.173509 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:44 crc kubenswrapper[4981]: I0227 18:45:44.173542 4981 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 18:45:44 crc kubenswrapper[4981]: E0227 18:45:44.178546 4981 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:44Z is after 2026-02-23T05:33:13Z" node="crc" Feb 27 18:45:44 crc kubenswrapper[4981]: I0227 18:45:44.552340 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:44Z is after 2026-02-23T05:33:13Z Feb 27 18:45:44 crc kubenswrapper[4981]: W0227 18:45:44.803415 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-27T18:45:44Z is after 2026-02-23T05:33:13Z Feb 27 18:45:44 crc kubenswrapper[4981]: E0227 18:45:44.803544 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:44Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 18:45:44 crc kubenswrapper[4981]: I0227 18:45:44.918627 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 27 18:45:45 crc kubenswrapper[4981]: I0227 18:45:45.003040 4981 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 18:45:45 crc kubenswrapper[4981]: I0227 18:45:45.003168 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 18:45:45 crc kubenswrapper[4981]: W0227 18:45:45.346289 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-27T18:45:45Z is after 2026-02-23T05:33:13Z Feb 27 18:45:45 crc kubenswrapper[4981]: E0227 18:45:45.346412 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:45Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 18:45:45 crc kubenswrapper[4981]: I0227 18:45:45.553300 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:45Z is after 2026-02-23T05:33:13Z Feb 27 18:45:45 crc kubenswrapper[4981]: W0227 18:45:45.851000 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:45Z is after 2026-02-23T05:33:13Z Feb 27 18:45:45 crc kubenswrapper[4981]: E0227 18:45:45.851198 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:45Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 18:45:46 crc kubenswrapper[4981]: I0227 18:45:46.552085 4981 csi_plugin.go:884] 
Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:46Z is after 2026-02-23T05:33:13Z Feb 27 18:45:46 crc kubenswrapper[4981]: E0227 18:45:46.756144 4981 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:46Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.18982ecab61fe4b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.544105145 +0000 UTC m=+1.022886335,LastTimestamp:2026-02-27 18:45:01.544105145 +0000 UTC m=+1.022886335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:47 crc kubenswrapper[4981]: I0227 18:45:47.552086 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:47Z is after 2026-02-23T05:33:13Z Feb 27 18:45:48 crc kubenswrapper[4981]: I0227 18:45:48.077399 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:45:48 crc kubenswrapper[4981]: I0227 18:45:48.077700 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Feb 27 18:45:48 crc kubenswrapper[4981]: I0227 18:45:48.079403 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:48 crc kubenswrapper[4981]: I0227 18:45:48.079449 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:48 crc kubenswrapper[4981]: I0227 18:45:48.079468 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:48 crc kubenswrapper[4981]: I0227 18:45:48.080533 4981 scope.go:117] "RemoveContainer" containerID="62ebbefe9f60cab1b3da6260ca3b106ec4dfa0f623f33ea611acd1ba50cf68fa" Feb 27 18:45:48 crc kubenswrapper[4981]: E0227 18:45:48.080884 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 18:45:48 crc kubenswrapper[4981]: I0227 18:45:48.319226 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 18:45:48 crc kubenswrapper[4981]: I0227 18:45:48.319483 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:48 crc kubenswrapper[4981]: I0227 18:45:48.321445 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:48 crc kubenswrapper[4981]: I0227 18:45:48.321502 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:48 crc kubenswrapper[4981]: I0227 18:45:48.321521 4981 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:48 crc kubenswrapper[4981]: I0227 18:45:48.552959 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:48Z is after 2026-02-23T05:33:13Z Feb 27 18:45:49 crc kubenswrapper[4981]: I0227 18:45:49.083715 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:45:49 crc kubenswrapper[4981]: I0227 18:45:49.083960 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:49 crc kubenswrapper[4981]: I0227 18:45:49.085721 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:49 crc kubenswrapper[4981]: I0227 18:45:49.085778 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:49 crc kubenswrapper[4981]: I0227 18:45:49.085797 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:49 crc kubenswrapper[4981]: I0227 18:45:49.086729 4981 scope.go:117] "RemoveContainer" containerID="62ebbefe9f60cab1b3da6260ca3b106ec4dfa0f623f33ea611acd1ba50cf68fa" Feb 27 18:45:49 crc kubenswrapper[4981]: E0227 18:45:49.087040 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 18:45:49 crc 
kubenswrapper[4981]: I0227 18:45:49.552256 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:49Z is after 2026-02-23T05:33:13Z Feb 27 18:45:50 crc kubenswrapper[4981]: I0227 18:45:50.552235 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:50Z is after 2026-02-23T05:33:13Z Feb 27 18:45:51 crc kubenswrapper[4981]: E0227 18:45:51.173793 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:51Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 27 18:45:51 crc kubenswrapper[4981]: I0227 18:45:51.178882 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:51 crc kubenswrapper[4981]: I0227 18:45:51.180820 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:51 crc kubenswrapper[4981]: I0227 18:45:51.180880 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:51 crc kubenswrapper[4981]: I0227 18:45:51.180902 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:51 crc kubenswrapper[4981]: I0227 18:45:51.180948 4981 kubelet_node_status.go:76] "Attempting to register 
node" node="crc" Feb 27 18:45:51 crc kubenswrapper[4981]: E0227 18:45:51.186123 4981 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:51Z is after 2026-02-23T05:33:13Z" node="crc" Feb 27 18:45:51 crc kubenswrapper[4981]: I0227 18:45:51.552987 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:51Z is after 2026-02-23T05:33:13Z Feb 27 18:45:51 crc kubenswrapper[4981]: E0227 18:45:51.695003 4981 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 18:45:52 crc kubenswrapper[4981]: I0227 18:45:52.552452 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:52Z is after 2026-02-23T05:33:13Z Feb 27 18:45:53 crc kubenswrapper[4981]: I0227 18:45:53.552322 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:53Z is after 2026-02-23T05:33:13Z Feb 27 18:45:54 crc kubenswrapper[4981]: I0227 18:45:54.552687 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:54Z is after 2026-02-23T05:33:13Z Feb 27 18:45:55 crc kubenswrapper[4981]: I0227 18:45:55.003109 4981 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 18:45:55 crc kubenswrapper[4981]: I0227 18:45:55.003222 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 18:45:55 crc kubenswrapper[4981]: W0227 18:45:55.464940 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 27 18:45:55 crc kubenswrapper[4981]: E0227 18:45:55.465086 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 27 18:45:55 crc kubenswrapper[4981]: I0227 18:45:55.551784 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Feb 27 18:45:56 crc kubenswrapper[4981]: I0227 18:45:56.554136 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.764526 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18982ecab61fe4b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.544105145 +0000 UTC m=+1.022886335,LastTimestamp:2026-02-27 18:45:01.544105145 +0000 UTC m=+1.022886335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.771475 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18982ecaba084f62 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.60966845 +0000 UTC m=+1.088449640,LastTimestamp:2026-02-27 18:45:01.60966845 +0000 UTC m=+1.088449640,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.778553 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18982ecaba08c925 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.609699621 +0000 UTC m=+1.088480811,LastTimestamp:2026-02-27 18:45:01.609699621 +0000 UTC m=+1.088480811,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.785583 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18982ecaba091033 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.609717811 +0000 UTC m=+1.088499011,LastTimestamp:2026-02-27 18:45:01.609717811 +0000 UTC m=+1.088499011,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.792263 4981 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18982ecabe741d68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.683842408 +0000 UTC m=+1.162623578,LastTimestamp:2026-02-27 18:45:01.683842408 +0000 UTC m=+1.162623578,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.800652 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18982ecaba084f62\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18982ecaba084f62 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.60966845 +0000 UTC m=+1.088449640,LastTimestamp:2026-02-27 18:45:01.730045225 +0000 UTC m=+1.208826425,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.808650 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18982ecaba08c925\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{crc.18982ecaba08c925 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.609699621 +0000 UTC m=+1.088480811,LastTimestamp:2026-02-27 18:45:01.730113397 +0000 UTC m=+1.208894597,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.815739 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18982ecaba091033\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18982ecaba091033 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.609717811 +0000 UTC m=+1.088499011,LastTimestamp:2026-02-27 18:45:01.730131127 +0000 UTC m=+1.208912317,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.826991 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18982ecaba084f62\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18982ecaba084f62 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.60966845 +0000 UTC m=+1.088449640,LastTimestamp:2026-02-27 18:45:01.732218812 +0000 UTC m=+1.211000012,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.834384 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18982ecaba08c925\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18982ecaba08c925 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.609699621 +0000 UTC m=+1.088480811,LastTimestamp:2026-02-27 18:45:01.732248143 +0000 UTC m=+1.211029343,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.840702 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18982ecaba091033\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18982ecaba091033 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc 
status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.609717811 +0000 UTC m=+1.088499011,LastTimestamp:2026-02-27 18:45:01.732266343 +0000 UTC m=+1.211047543,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.844944 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18982ecaba084f62\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18982ecaba084f62 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.60966845 +0000 UTC m=+1.088449640,LastTimestamp:2026-02-27 18:45:01.732648212 +0000 UTC m=+1.211429412,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.852411 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18982ecaba08c925\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18982ecaba08c925 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.609699621 +0000 UTC 
m=+1.088480811,LastTimestamp:2026-02-27 18:45:01.732690493 +0000 UTC m=+1.211471693,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.859191 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18982ecaba091033\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18982ecaba091033 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.609717811 +0000 UTC m=+1.088499011,LastTimestamp:2026-02-27 18:45:01.732716174 +0000 UTC m=+1.211497374,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.866378 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18982ecaba084f62\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18982ecaba084f62 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.60966845 +0000 UTC m=+1.088449640,LastTimestamp:2026-02-27 18:45:01.734763117 +0000 UTC m=+1.213544307,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.873610 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18982ecaba08c925\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18982ecaba08c925 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.609699621 +0000 UTC m=+1.088480811,LastTimestamp:2026-02-27 18:45:01.734786818 +0000 UTC m=+1.213568018,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.880336 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18982ecaba091033\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18982ecaba091033 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.609717811 +0000 UTC m=+1.088499011,LastTimestamp:2026-02-27 18:45:01.734803418 +0000 UTC m=+1.213584618,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.885630 4981 event.go:359] 
"Server rejected event (will not retry!)" err="events \"crc.18982ecaba084f62\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18982ecaba084f62 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.60966845 +0000 UTC m=+1.088449640,LastTimestamp:2026-02-27 18:45:01.734836119 +0000 UTC m=+1.213617289,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.891276 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18982ecaba08c925\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18982ecaba08c925 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.609699621 +0000 UTC m=+1.088480811,LastTimestamp:2026-02-27 18:45:01.73485931 +0000 UTC m=+1.213640480,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.897145 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18982ecaba091033\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group 
\"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18982ecaba091033 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.609717811 +0000 UTC m=+1.088499011,LastTimestamp:2026-02-27 18:45:01.7348738 +0000 UTC m=+1.213654960,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.902140 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18982ecaba084f62\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18982ecaba084f62 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.60966845 +0000 UTC m=+1.088449640,LastTimestamp:2026-02-27 18:45:01.736660806 +0000 UTC m=+1.215441976,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.908407 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18982ecaba08c925\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18982ecaba08c925 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.609699621 +0000 UTC m=+1.088480811,LastTimestamp:2026-02-27 18:45:01.736734438 +0000 UTC m=+1.215515598,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.912990 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18982ecaba091033\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18982ecaba091033 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.609717811 +0000 UTC m=+1.088499011,LastTimestamp:2026-02-27 18:45:01.736745858 +0000 UTC m=+1.215527018,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.919945 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18982ecaba084f62\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18982ecaba084f62 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status 
is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.60966845 +0000 UTC m=+1.088449640,LastTimestamp:2026-02-27 18:45:01.737404485 +0000 UTC m=+1.216185645,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.924611 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18982ecaba08c925\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18982ecaba08c925 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:01.609699621 +0000 UTC m=+1.088480811,LastTimestamp:2026-02-27 18:45:01.737423726 +0000 UTC m=+1.216204886,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.933471 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18982ecadaefc2d7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:02.161707735 +0000 UTC m=+1.640488905,LastTimestamp:2026-02-27 18:45:02.161707735 +0000 UTC m=+1.640488905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.938268 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982ecadb4cde60 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:02.167809632 +0000 UTC m=+1.646590782,LastTimestamp:2026-02-27 18:45:02.167809632 +0000 UTC m=+1.646590782,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.944775 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18982ecadb684c5a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:02.169607258 +0000 UTC m=+1.648388448,LastTimestamp:2026-02-27 18:45:02.169607258 +0000 UTC m=+1.648388448,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.948595 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18982ecadbc64fe3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:02.175768547 +0000 UTC m=+1.654549737,LastTimestamp:2026-02-27 18:45:02.175768547 +0000 UTC m=+1.654549737,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.952579 4981 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982ecadbcd7077 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:02.176235639 +0000 UTC m=+1.655016789,LastTimestamp:2026-02-27 18:45:02.176235639 +0000 UTC m=+1.655016789,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.955606 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18982ecb04da28e1 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:02.864935137 +0000 UTC m=+2.343716337,LastTimestamp:2026-02-27 18:45:02.864935137 +0000 UTC 
m=+2.343716337,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.961249 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982ecb04f20b7d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:02.866500477 +0000 UTC m=+2.345281667,LastTimestamp:2026-02-27 18:45:02.866500477 +0000 UTC m=+2.345281667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.962763 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982ecb0510b4c2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:02.86850989 +0000 UTC m=+2.347291090,LastTimestamp:2026-02-27 
18:45:02.86850989 +0000 UTC m=+2.347291090,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.969102 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18982ecb052d0bf1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:02.870367217 +0000 UTC m=+2.349148407,LastTimestamp:2026-02-27 18:45:02.870367217 +0000 UTC m=+2.349148407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.975836 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18982ecb0539ee27 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:02.871211559 +0000 UTC 
m=+2.349992749,LastTimestamp:2026-02-27 18:45:02.871211559 +0000 UTC m=+2.349992749,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.980486 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18982ecb05f14e3f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:02.883229247 +0000 UTC m=+2.362010447,LastTimestamp:2026-02-27 18:45:02.883229247 +0000 UTC m=+2.362010447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.984650 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982ecb060f604d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container 
kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:02.885199949 +0000 UTC m=+2.363981139,LastTimestamp:2026-02-27 18:45:02.885199949 +0000 UTC m=+2.363981139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:56 crc kubenswrapper[4981]: E0227 18:45:56.996607 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982ecb0621a811 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:02.886397969 +0000 UTC m=+2.365179169,LastTimestamp:2026-02-27 18:45:02.886397969 +0000 UTC m=+2.365179169,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.001563 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982ecb0628955b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:02.886851931 +0000 UTC m=+2.365633131,LastTimestamp:2026-02-27 18:45:02.886851931 +0000 UTC m=+2.365633131,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.006994 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18982ecb06504e7c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:02.889455228 +0000 UTC m=+2.368236418,LastTimestamp:2026-02-27 18:45:02.889455228 +0000 UTC m=+2.368236418,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.013531 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18982ecb06a7b94d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:02.895184205 +0000 UTC m=+2.373965395,LastTimestamp:2026-02-27 18:45:02.895184205 +0000 UTC m=+2.373965395,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.020387 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982ecb1b58cf27 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:03.242333991 +0000 UTC m=+2.721115191,LastTimestamp:2026-02-27 18:45:03.242333991 +0000 UTC m=+2.721115191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.026317 4981 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982ecb1c1a4cf8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:03.255014648 +0000 UTC m=+2.733795848,LastTimestamp:2026-02-27 18:45:03.255014648 +0000 UTC m=+2.733795848,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.030557 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982ecb1c31a20d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:03.256543757 +0000 UTC 
m=+2.735324957,LastTimestamp:2026-02-27 18:45:03.256543757 +0000 UTC m=+2.735324957,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.035169 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982ecb2ba75092 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:03.515914386 +0000 UTC m=+2.994695576,LastTimestamp:2026-02-27 18:45:03.515914386 +0000 UTC m=+2.994695576,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.039764 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982ecb2c8dbd8c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:03.531015564 +0000 UTC m=+3.009796764,LastTimestamp:2026-02-27 18:45:03.531015564 +0000 UTC m=+3.009796764,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.045641 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982ecb2ca47a2b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:03.532505643 +0000 UTC m=+3.011286843,LastTimestamp:2026-02-27 18:45:03.532505643 +0000 UTC m=+3.011286843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.051882 4981 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18982ecb348179a2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:03.664429474 +0000 UTC m=+3.143210664,LastTimestamp:2026-02-27 18:45:03.664429474 +0000 UTC m=+3.143210664,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.058388 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982ecb34848c74 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:03.6646309 +0000 UTC m=+3.143412090,LastTimestamp:2026-02-27 18:45:03.6646309 +0000 UTC 
m=+3.143412090,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.066570 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18982ecb34b9daa0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:03.66812432 +0000 UTC m=+3.146905490,LastTimestamp:2026-02-27 18:45:03.66812432 +0000 UTC m=+3.146905490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.073691 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18982ecb35c7569c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:03.685785244 +0000 UTC m=+3.164566444,LastTimestamp:2026-02-27 18:45:03.685785244 +0000 UTC m=+3.164566444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.077499 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982ecb3d712518 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:03.8143542 +0000 UTC m=+3.293135370,LastTimestamp:2026-02-27 18:45:03.8143542 +0000 UTC m=+3.293135370,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.081961 4981 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982ecb3e38baf2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:03.827434226 +0000 UTC m=+3.306215396,LastTimestamp:2026-02-27 18:45:03.827434226 +0000 UTC m=+3.306215396,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.085893 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18982ecb43b0ef17 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:03.919197975 +0000 UTC m=+3.397979145,LastTimestamp:2026-02-27 18:45:03.919197975 +0000 UTC m=+3.397979145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.091801 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18982ecb440fb029 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:03.925407785 +0000 UTC m=+3.404188955,LastTimestamp:2026-02-27 18:45:03.925407785 +0000 UTC m=+3.404188955,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.096650 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982ecb443c382e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:03.92832619 +0000 UTC m=+3.407107360,LastTimestamp:2026-02-27 18:45:03.92832619 +0000 UTC 
m=+3.407107360,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.100942 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18982ecb443e9226 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:03.928480294 +0000 UTC m=+3.407261464,LastTimestamp:2026-02-27 18:45:03.928480294 +0000 UTC m=+3.407261464,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.106045 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18982ecb44c35140 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:03.937179968 +0000 UTC 
m=+3.415961138,LastTimestamp:2026-02-27 18:45:03.937179968 +0000 UTC m=+3.415961138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.110338 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18982ecb44d775d5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:03.938500053 +0000 UTC m=+3.417281223,LastTimestamp:2026-02-27 18:45:03.938500053 +0000 UTC m=+3.417281223,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.116176 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18982ecb450c0144 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:03.94194362 +0000 UTC m=+3.420724790,LastTimestamp:2026-02-27 18:45:03.94194362 +0000 UTC m=+3.420724790,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.122518 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18982ecb45518fa0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:03.946502048 +0000 UTC m=+3.425283198,LastTimestamp:2026-02-27 18:45:03.946502048 +0000 UTC m=+3.425283198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.126717 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18982ecb457f3475 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:03.949493365 +0000 UTC m=+3.428274525,LastTimestamp:2026-02-27 18:45:03.949493365 +0000 UTC m=+3.428274525,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.133795 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982ecb47c5182b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:03.987628075 +0000 UTC m=+3.466409235,LastTimestamp:2026-02-27 18:45:03.987628075 +0000 UTC m=+3.466409235,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.140264 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18982ecb51d18a85 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:04.156215941 +0000 UTC m=+3.634997101,LastTimestamp:2026-02-27 18:45:04.156215941 +0000 UTC m=+3.634997101,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.144394 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18982ecb52f566e4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:04.175343332 +0000 UTC m=+3.654124492,LastTimestamp:2026-02-27 18:45:04.175343332 +0000 UTC m=+3.654124492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.149070 4981 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18982ecb536871c4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:04.182882756 +0000 UTC m=+3.661663916,LastTimestamp:2026-02-27 18:45:04.182882756 +0000 UTC m=+3.661663916,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.155082 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18982ecb537983af openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:04.184001455 +0000 UTC m=+3.662782615,LastTimestamp:2026-02-27 18:45:04.184001455 +0000 
UTC m=+3.662782615,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.160959 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18982ecb5469cfae openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:04.19974955 +0000 UTC m=+3.678530710,LastTimestamp:2026-02-27 18:45:04.19974955 +0000 UTC m=+3.678530710,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.166483 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18982ecb547f0410 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:04.201139216 +0000 UTC m=+3.679920376,LastTimestamp:2026-02-27 18:45:04.201139216 +0000 UTC m=+3.679920376,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.171966 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18982ecb62252558 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:04.43013052 +0000 UTC m=+3.908911690,LastTimestamp:2026-02-27 18:45:04.43013052 +0000 UTC m=+3.908911690,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.176777 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18982ecb626f0681 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:04.434972289 +0000 UTC m=+3.913753459,LastTimestamp:2026-02-27 18:45:04.434972289 +0000 UTC m=+3.913753459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.182646 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18982ecb638cf607 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:04.453711367 +0000 UTC m=+3.932492537,LastTimestamp:2026-02-27 18:45:04.453711367 +0000 UTC m=+3.932492537,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.187280 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18982ecb63f9f708 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:04.460855048 +0000 UTC m=+3.939636218,LastTimestamp:2026-02-27 18:45:04.460855048 +0000 UTC m=+3.939636218,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.191858 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18982ecb6418e566 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:04.46288215 +0000 UTC m=+3.941663350,LastTimestamp:2026-02-27 18:45:04.46288215 +0000 UTC m=+3.941663350,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.199447 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982ecb72b301aa openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:04.707862954 +0000 UTC m=+4.186644144,LastTimestamp:2026-02-27 18:45:04.707862954 +0000 UTC m=+4.186644144,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.206434 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18982ecb736d8bc0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:04.720088 
+0000 UTC m=+4.198869170,LastTimestamp:2026-02-27 18:45:04.720088 +0000 UTC m=+4.198869170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.212872 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18982ecb74b09d00 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:04.741260544 +0000 UTC m=+4.220041734,LastTimestamp:2026-02-27 18:45:04.741260544 +0000 UTC m=+4.220041734,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.219770 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18982ecb74d810b0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:04.743846064 +0000 UTC m=+4.222627234,LastTimestamp:2026-02-27 18:45:04.743846064 +0000 UTC m=+4.222627234,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.226492 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18982ecb810b72c3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:04.948540099 +0000 UTC m=+4.427321259,LastTimestamp:2026-02-27 18:45:04.948540099 +0000 UTC m=+4.427321259,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.230807 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982ecb81168d58 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:04.9492678 +0000 UTC m=+4.428048960,LastTimestamp:2026-02-27 18:45:04.9492678 +0000 UTC m=+4.428048960,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.237495 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18982ecb81c56922 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:04.96072733 +0000 UTC m=+4.439508490,LastTimestamp:2026-02-27 18:45:04.96072733 +0000 UTC m=+4.439508490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.243619 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982ecb820a439c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:04.965239708 +0000 UTC m=+4.444020868,LastTimestamp:2026-02-27 18:45:04.965239708 +0000 UTC m=+4.444020868,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.250895 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982ecbaf6e03d8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:05.726751704 +0000 UTC m=+5.205532894,LastTimestamp:2026-02-27 18:45:05.726751704 +0000 UTC m=+5.205532894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.257287 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18982ecbbe688685 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:05.978050181 +0000 UTC m=+5.456831371,LastTimestamp:2026-02-27 18:45:05.978050181 +0000 UTC m=+5.456831371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.262035 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982ecbbf0f7cd7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:05.988992215 +0000 UTC m=+5.467773405,LastTimestamp:2026-02-27 18:45:05.988992215 +0000 UTC m=+5.467773405,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.268144 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982ecbbf296f59 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:05.990692697 +0000 UTC m=+5.469473887,LastTimestamp:2026-02-27 18:45:05.990692697 +0000 UTC m=+5.469473887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.277315 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982ecbceff48b0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:06.256365744 +0000 UTC m=+5.735146944,LastTimestamp:2026-02-27 18:45:06.256365744 +0000 UTC m=+5.735146944,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.284125 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982ecbcfdd3f65 openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:06.270912357 +0000 UTC m=+5.749693557,LastTimestamp:2026-02-27 18:45:06.270912357 +0000 UTC m=+5.749693557,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.290414 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982ecbcff8fa99 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:06.272729753 +0000 UTC m=+5.751510953,LastTimestamp:2026-02-27 18:45:06.272729753 +0000 UTC m=+5.751510953,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.296715 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18982ecbde22deff openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:06.510356223 +0000 UTC m=+5.989137413,LastTimestamp:2026-02-27 18:45:06.510356223 +0000 UTC m=+5.989137413,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.301759 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982ecbdf1837ac openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:06.526435244 +0000 UTC m=+6.005216434,LastTimestamp:2026-02-27 18:45:06.526435244 +0000 UTC m=+6.005216434,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.308375 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982ecbdf318d39 openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:06.528095545 +0000 UTC m=+6.006876705,LastTimestamp:2026-02-27 18:45:06.528095545 +0000 UTC m=+6.006876705,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.314858 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982ecbedbe9ad9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:06.772220633 +0000 UTC m=+6.251001833,LastTimestamp:2026-02-27 18:45:06.772220633 +0000 UTC m=+6.251001833,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.321048 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18982ecbeeb86784 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:06.788591492 +0000 UTC m=+6.267372682,LastTimestamp:2026-02-27 18:45:06.788591492 +0000 UTC m=+6.267372682,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.332545 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982ecbeed71911 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:06.790603025 +0000 UTC m=+6.269384215,LastTimestamp:2026-02-27 18:45:06.790603025 +0000 UTC m=+6.269384215,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.339777 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982ecbfe331033 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:07.048288307 +0000 UTC m=+6.527069507,LastTimestamp:2026-02-27 18:45:07.048288307 +0000 UTC m=+6.527069507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.346571 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982ecbff46caec openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:07.066358508 +0000 UTC m=+6.545139708,LastTimestamp:2026-02-27 18:45:07.066358508 +0000 UTC m=+6.545139708,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.358877 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 18:45:57 crc 
kubenswrapper[4981]: &Event{ObjectMeta:{kube-controller-manager-crc.18982ecdd84a2f62 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Feb 27 18:45:57 crc kubenswrapper[4981]: body: Feb 27 18:45:57 crc kubenswrapper[4981]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:15.002204002 +0000 UTC m=+14.480985192,LastTimestamp:2026-02-27 18:45:15.002204002 +0000 UTC m=+14.480985192,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 18:45:57 crc kubenswrapper[4981]: > Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.362913 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982ecdd84ba0cf openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:15.002298575 +0000 UTC 
m=+14.481079775,LastTimestamp:2026-02-27 18:45:15.002298575 +0000 UTC m=+14.481079775,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.364603 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 27 18:45:57 crc kubenswrapper[4981]: &Event{ObjectMeta:{kube-apiserver-crc.18982ece4111f4e2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 27 18:45:57 crc kubenswrapper[4981]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 27 18:45:57 crc kubenswrapper[4981]: Feb 27 18:45:57 crc kubenswrapper[4981]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:16.76012669 +0000 UTC m=+16.238907890,LastTimestamp:2026-02-27 18:45:16.76012669 +0000 UTC m=+16.238907890,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 18:45:57 crc kubenswrapper[4981]: > Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.371744 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.18982ece41130c58 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:16.760198232 +0000 UTC m=+16.238979432,LastTimestamp:2026-02-27 18:45:16.760198232 +0000 UTC m=+16.238979432,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.379117 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18982ece4111f4e2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 27 18:45:57 crc kubenswrapper[4981]: &Event{ObjectMeta:{kube-apiserver-crc.18982ece4111f4e2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 27 18:45:57 crc kubenswrapper[4981]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 27 18:45:57 crc kubenswrapper[4981]: Feb 27 18:45:57 crc kubenswrapper[4981]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:16.76012669 +0000 UTC 
m=+16.238907890,LastTimestamp:2026-02-27 18:45:16.767292338 +0000 UTC m=+16.246073538,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 18:45:57 crc kubenswrapper[4981]: > Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.383342 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18982ece41130c58\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18982ece41130c58 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:16.760198232 +0000 UTC m=+16.238979432,LastTimestamp:2026-02-27 18:45:16.7673496 +0000 UTC m=+16.246130790,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.388354 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18982ecb74d810b0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18982ecb74d810b0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:04.743846064 +0000 UTC m=+4.222627234,LastTimestamp:2026-02-27 18:45:17.790804949 +0000 UTC m=+17.269586119,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.393285 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18982ecb810b72c3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18982ecb810b72c3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:04.948540099 +0000 UTC m=+4.427321259,LastTimestamp:2026-02-27 18:45:18.00255644 +0000 UTC m=+17.481337640,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.398274 4981 event.go:359] "Server rejected event (will not retry!)" err="events 
\"kube-apiserver-crc.18982ecb81c56922\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18982ecb81c56922 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:04.96072733 +0000 UTC m=+4.439508490,LastTimestamp:2026-02-27 18:45:18.020889789 +0000 UTC m=+17.499670959,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.408815 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 18:45:57 crc kubenswrapper[4981]: &Event{ObjectMeta:{kube-controller-manager-crc.18982ed02c5fc0f9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 27 18:45:57 crc kubenswrapper[4981]: body: Feb 27 18:45:57 crc kubenswrapper[4981]: 
,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:25.002838265 +0000 UTC m=+24.481619455,LastTimestamp:2026-02-27 18:45:25.002838265 +0000 UTC m=+24.481619455,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 18:45:57 crc kubenswrapper[4981]: > Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.415272 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982ed02c60aed5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:25.002899157 +0000 UTC m=+24.481680357,LastTimestamp:2026-02-27 18:45:25.002899157 +0000 UTC m=+24.481680357,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.420471 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 18:45:57 crc kubenswrapper[4981]: &Event{ObjectMeta:{kube-controller-manager-crc.18982ed251879b8f 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": read tcp 192.168.126.11:41196->192.168.126.11:10357: read: connection reset by peer Feb 27 18:45:57 crc kubenswrapper[4981]: body: Feb 27 18:45:57 crc kubenswrapper[4981]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:34.216141711 +0000 UTC m=+33.694922921,LastTimestamp:2026-02-27 18:45:34.216141711 +0000 UTC m=+33.694922921,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 18:45:57 crc kubenswrapper[4981]: > Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.424977 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982ed2518956f9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:41196->192.168.126.11:10357: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:34.216255225 +0000 UTC m=+33.695036425,LastTimestamp:2026-02-27 18:45:34.216255225 +0000 
UTC m=+33.695036425,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.432287 4981 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982ed251c8690e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:34.220388622 +0000 UTC m=+33.699169822,LastTimestamp:2026-02-27 18:45:34.220388622 +0000 UTC m=+33.699169822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.439169 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18982ecb0628955b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982ecb0628955b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:02.886851931 +0000 UTC m=+2.365633131,LastTimestamp:2026-02-27 18:45:34.251733526 +0000 UTC m=+33.730514686,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.445585 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18982ecb1b58cf27\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982ecb1b58cf27 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:03.242333991 +0000 UTC m=+2.721115191,LastTimestamp:2026-02-27 18:45:34.512863029 +0000 UTC m=+33.991644229,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.452356 4981 event.go:359] "Server rejected event 
(will not retry!)" err="events \"kube-controller-manager-crc.18982ecb1c1a4cf8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982ecb1c1a4cf8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:03.255014648 +0000 UTC m=+2.733795848,LastTimestamp:2026-02-27 18:45:34.52723099 +0000 UTC m=+34.006012190,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.464983 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18982ed02c5fc0f9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 18:45:57 crc kubenswrapper[4981]: &Event{ObjectMeta:{kube-controller-manager-crc.18982ed02c5fc0f9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 27 
18:45:57 crc kubenswrapper[4981]: body: Feb 27 18:45:57 crc kubenswrapper[4981]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:25.002838265 +0000 UTC m=+24.481619455,LastTimestamp:2026-02-27 18:45:45.003134523 +0000 UTC m=+44.481915713,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 18:45:57 crc kubenswrapper[4981]: > Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.466575 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18982ed02c60aed5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982ed02c60aed5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:25.002899157 +0000 UTC m=+24.481680357,LastTimestamp:2026-02-27 18:45:45.003203725 +0000 UTC m=+44.481984915,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:45:57 crc kubenswrapper[4981]: E0227 18:45:57.473561 4981 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18982ed02c5fc0f9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in 
the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 18:45:57 crc kubenswrapper[4981]: &Event{ObjectMeta:{kube-controller-manager-crc.18982ed02c5fc0f9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 27 18:45:57 crc kubenswrapper[4981]: body: Feb 27 18:45:57 crc kubenswrapper[4981]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:45:25.002838265 +0000 UTC m=+24.481619455,LastTimestamp:2026-02-27 18:45:55.003188775 +0000 UTC m=+54.481969965,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 18:45:57 crc kubenswrapper[4981]: > Feb 27 18:45:57 crc kubenswrapper[4981]: I0227 18:45:57.554232 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 18:45:58 crc kubenswrapper[4981]: E0227 18:45:58.181767 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 27 18:45:58 crc kubenswrapper[4981]: I0227 18:45:58.186826 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:45:58 crc kubenswrapper[4981]: I0227 
18:45:58.188834 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:45:58 crc kubenswrapper[4981]: I0227 18:45:58.188896 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:45:58 crc kubenswrapper[4981]: I0227 18:45:58.188916 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:45:58 crc kubenswrapper[4981]: I0227 18:45:58.188957 4981 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 18:45:58 crc kubenswrapper[4981]: E0227 18:45:58.195781 4981 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 27 18:45:58 crc kubenswrapper[4981]: I0227 18:45:58.554718 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 18:45:59 crc kubenswrapper[4981]: I0227 18:45:59.555188 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 18:46:00 crc kubenswrapper[4981]: I0227 18:46:00.554736 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 18:46:01 crc kubenswrapper[4981]: I0227 18:46:01.556494 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 18:46:01 crc kubenswrapper[4981]: E0227 18:46:01.695173 4981 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 18:46:02 crc kubenswrapper[4981]: I0227 18:46:02.555119 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 18:46:02 crc kubenswrapper[4981]: I0227 18:46:02.628292 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:46:02 crc kubenswrapper[4981]: I0227 18:46:02.630464 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:02 crc kubenswrapper[4981]: I0227 18:46:02.630585 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:02 crc kubenswrapper[4981]: I0227 18:46:02.630610 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:02 crc kubenswrapper[4981]: I0227 18:46:02.632013 4981 scope.go:117] "RemoveContainer" containerID="62ebbefe9f60cab1b3da6260ca3b106ec4dfa0f623f33ea611acd1ba50cf68fa" Feb 27 18:46:02 crc kubenswrapper[4981]: E0227 18:46:02.632618 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 18:46:03 crc kubenswrapper[4981]: I0227 18:46:03.046490 4981 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 18:46:03 crc kubenswrapper[4981]: I0227 18:46:03.046756 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:46:03 crc kubenswrapper[4981]: I0227 18:46:03.048990 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:03 crc kubenswrapper[4981]: I0227 18:46:03.049046 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:03 crc kubenswrapper[4981]: I0227 18:46:03.049093 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:03 crc kubenswrapper[4981]: I0227 18:46:03.054297 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 18:46:03 crc kubenswrapper[4981]: I0227 18:46:03.552276 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 18:46:03 crc kubenswrapper[4981]: I0227 18:46:03.984643 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:46:03 crc kubenswrapper[4981]: I0227 18:46:03.985544 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:03 crc kubenswrapper[4981]: I0227 18:46:03.985568 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:03 crc kubenswrapper[4981]: I0227 18:46:03.985577 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 27 18:46:04 crc kubenswrapper[4981]: I0227 18:46:04.560364 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 18:46:05 crc kubenswrapper[4981]: E0227 18:46:05.185612 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 27 18:46:05 crc kubenswrapper[4981]: I0227 18:46:05.196816 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:46:05 crc kubenswrapper[4981]: I0227 18:46:05.202628 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:05 crc kubenswrapper[4981]: I0227 18:46:05.202678 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:05 crc kubenswrapper[4981]: I0227 18:46:05.202698 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:05 crc kubenswrapper[4981]: I0227 18:46:05.202725 4981 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 18:46:05 crc kubenswrapper[4981]: E0227 18:46:05.213612 4981 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 27 18:46:05 crc kubenswrapper[4981]: I0227 18:46:05.553902 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource 
"csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 18:46:06 crc kubenswrapper[4981]: I0227 18:46:06.555670 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 18:46:07 crc kubenswrapper[4981]: I0227 18:46:07.553365 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 18:46:08 crc kubenswrapper[4981]: I0227 18:46:08.553009 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 18:46:09 crc kubenswrapper[4981]: I0227 18:46:09.551824 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 18:46:10 crc kubenswrapper[4981]: I0227 18:46:10.554385 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 18:46:11 crc kubenswrapper[4981]: I0227 18:46:11.551753 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 18:46:11 crc kubenswrapper[4981]: E0227 18:46:11.696596 4981 eviction_manager.go:285] "Eviction manager: failed 
to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 18:46:12 crc kubenswrapper[4981]: E0227 18:46:12.198083 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 27 18:46:12 crc kubenswrapper[4981]: I0227 18:46:12.214014 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:46:12 crc kubenswrapper[4981]: I0227 18:46:12.216098 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:46:12 crc kubenswrapper[4981]: I0227 18:46:12.216147 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:46:12 crc kubenswrapper[4981]: I0227 18:46:12.216168 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:46:12 crc kubenswrapper[4981]: I0227 18:46:12.216208 4981 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 18:46:12 crc kubenswrapper[4981]: E0227 18:46:12.223186 4981 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 27 18:46:12 crc kubenswrapper[4981]: I0227 18:46:12.553558 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 18:46:12 crc kubenswrapper[4981]: I0227 18:46:12.927746 4981 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 27 18:46:12 crc
kubenswrapper[4981]: I0227 18:46:12.947480 4981 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 27 18:46:13 crc kubenswrapper[4981]: I0227 18:46:13.553583 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 18:46:14 crc kubenswrapper[4981]: I0227 18:46:14.554657 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 18:46:14 crc kubenswrapper[4981]: W0227 18:46:14.881158 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 27 18:46:14 crc kubenswrapper[4981]: E0227 18:46:14.881229 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Feb 27 18:46:15 crc kubenswrapper[4981]: I0227 18:46:15.554968 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 18:46:15 crc kubenswrapper[4981]: I0227 18:46:15.628756 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:46:15 crc kubenswrapper[4981]: I0227
18:46:15.630598 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:46:15 crc kubenswrapper[4981]: I0227 18:46:15.630649 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:46:15 crc kubenswrapper[4981]: I0227 18:46:15.630668 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:46:15 crc kubenswrapper[4981]: I0227 18:46:15.631540 4981 scope.go:117] "RemoveContainer" containerID="62ebbefe9f60cab1b3da6260ca3b106ec4dfa0f623f33ea611acd1ba50cf68fa"
Feb 27 18:46:16 crc kubenswrapper[4981]: I0227 18:46:16.022892 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Feb 27 18:46:16 crc kubenswrapper[4981]: I0227 18:46:16.025809 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d"}
Feb 27 18:46:16 crc kubenswrapper[4981]: I0227 18:46:16.025976 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:46:16 crc kubenswrapper[4981]: I0227 18:46:16.027493 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:46:16 crc kubenswrapper[4981]: I0227 18:46:16.027529 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:46:16 crc kubenswrapper[4981]: I0227 18:46:16.027541 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:46:16 crc kubenswrapper[4981]: I0227 18:46:16.555712 4981 csi_plugin.go:884] Failed to contact API
server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 18:46:16 crc kubenswrapper[4981]: I0227 18:46:16.630256 4981 csr.go:261] certificate signing request csr-499x9 is approved, waiting to be issued
Feb 27 18:46:16 crc kubenswrapper[4981]: I0227 18:46:16.644544 4981 csr.go:257] certificate signing request csr-499x9 is issued
Feb 27 18:46:16 crc kubenswrapper[4981]: I0227 18:46:16.689554 4981 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 27 18:46:17 crc kubenswrapper[4981]: I0227 18:46:17.366251 4981 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 27 18:46:17 crc kubenswrapper[4981]: I0227 18:46:17.647268 4981 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-28 11:48:00.261839239 +0000 UTC
Feb 27 18:46:17 crc kubenswrapper[4981]: I0227 18:46:17.647331 4981 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7289h1m42.614514574s for next certificate rotation
Feb 27 18:46:18 crc kubenswrapper[4981]: I0227 18:46:18.035352 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log"
Feb 27 18:46:18 crc kubenswrapper[4981]: I0227 18:46:18.036102 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Feb 27 18:46:18 crc kubenswrapper[4981]: I0227 18:46:18.038741 4981 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d" exitCode=255
Feb 27
18:46:18 crc kubenswrapper[4981]: I0227 18:46:18.038822 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d"}
Feb 27 18:46:18 crc kubenswrapper[4981]: I0227 18:46:18.038910 4981 scope.go:117] "RemoveContainer" containerID="62ebbefe9f60cab1b3da6260ca3b106ec4dfa0f623f33ea611acd1ba50cf68fa"
Feb 27 18:46:18 crc kubenswrapper[4981]: I0227 18:46:18.039080 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 18:46:18 crc kubenswrapper[4981]: I0227 18:46:18.040466 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:46:18 crc kubenswrapper[4981]: I0227 18:46:18.040520 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:46:18 crc kubenswrapper[4981]: I0227 18:46:18.040540 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:46:18 crc kubenswrapper[4981]: I0227 18:46:18.041727 4981 scope.go:117] "RemoveContainer" containerID="8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d"
Feb 27 18:46:18 crc kubenswrapper[4981]: E0227 18:46:18.042039 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 18:46:18 crc kubenswrapper[4981]: I0227 18:46:18.077085 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy"
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.044727 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.047794 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.048993 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.049036 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.049081 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.049974 4981 scope.go:117] "RemoveContainer" containerID="8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d" Feb 27 18:46:19 crc kubenswrapper[4981]: E0227 18:46:19.050283 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.084043 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.223789 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:46:19 crc 
kubenswrapper[4981]: I0227 18:46:19.225574 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.225627 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.225646 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.225758 4981 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.236025 4981 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.236415 4981 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Feb 27 18:46:19 crc kubenswrapper[4981]: E0227 18:46:19.236451 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.242720 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.242759 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.242776 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.242798 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.242817 4981 setters.go:603] "Node became not ready" node="crc"
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:19Z","lastTransitionTime":"2026-02-27T18:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:19 crc kubenswrapper[4981]: E0227 18:46:19.262265 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.273553 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.273613 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.273637 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.273669 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.273693 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:19Z","lastTransitionTime":"2026-02-27T18:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 27 18:46:19 crc kubenswrapper[4981]: E0227 18:46:19.291154 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.302915 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.302989 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.303016 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.303047 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.303123 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:19Z","lastTransitionTime":"2026-02-27T18:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:19 crc kubenswrapper[4981]: E0227 18:46:19.320941 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.332634 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.332694 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.332713 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.332736 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.332758 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:19Z","lastTransitionTime":"2026-02-27T18:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:19 crc kubenswrapper[4981]: E0227 18:46:19.349567 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:19 crc kubenswrapper[4981]: E0227 18:46:19.349786 4981 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 18:46:19 crc kubenswrapper[4981]: E0227 18:46:19.349824 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:19 crc kubenswrapper[4981]: E0227 18:46:19.450502 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:19 crc kubenswrapper[4981]: I0227 18:46:19.495286 4981 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 27 18:46:19 crc kubenswrapper[4981]: E0227 18:46:19.550676 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:19 crc kubenswrapper[4981]: E0227 18:46:19.651131 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:19 crc kubenswrapper[4981]: E0227 18:46:19.751715 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:19 crc kubenswrapper[4981]: E0227 18:46:19.851878 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:19 crc kubenswrapper[4981]: E0227 18:46:19.953016 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:20 crc kubenswrapper[4981]: I0227 18:46:20.050289 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 18:46:20 crc kubenswrapper[4981]: I0227 18:46:20.051826 4981 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:20 crc kubenswrapper[4981]: I0227 18:46:20.051883 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:20 crc kubenswrapper[4981]: I0227 18:46:20.051906 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:20 crc kubenswrapper[4981]: I0227 18:46:20.052877 4981 scope.go:117] "RemoveContainer" containerID="8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d" Feb 27 18:46:20 crc kubenswrapper[4981]: E0227 18:46:20.053223 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 18:46:20 crc kubenswrapper[4981]: E0227 18:46:20.053259 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:20 crc kubenswrapper[4981]: E0227 18:46:20.153575 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:20 crc kubenswrapper[4981]: E0227 18:46:20.254642 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:20 crc kubenswrapper[4981]: E0227 18:46:20.355362 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:20 crc kubenswrapper[4981]: E0227 18:46:20.456334 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:20 crc kubenswrapper[4981]: E0227 
18:46:20.557359 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:20 crc kubenswrapper[4981]: E0227 18:46:20.657677 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:20 crc kubenswrapper[4981]: E0227 18:46:20.758805 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:20 crc kubenswrapper[4981]: E0227 18:46:20.859882 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:20 crc kubenswrapper[4981]: E0227 18:46:20.960321 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:21 crc kubenswrapper[4981]: E0227 18:46:21.061518 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:21 crc kubenswrapper[4981]: E0227 18:46:21.162602 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:21 crc kubenswrapper[4981]: E0227 18:46:21.263471 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:21 crc kubenswrapper[4981]: E0227 18:46:21.363942 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:21 crc kubenswrapper[4981]: E0227 18:46:21.465139 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:21 crc kubenswrapper[4981]: E0227 18:46:21.566240 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:21 crc kubenswrapper[4981]: E0227 18:46:21.666536 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 
18:46:21 crc kubenswrapper[4981]: E0227 18:46:21.697879 4981 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 18:46:21 crc kubenswrapper[4981]: I0227 18:46:21.761173 4981 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 27 18:46:21 crc kubenswrapper[4981]: E0227 18:46:21.767564 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:21 crc kubenswrapper[4981]: E0227 18:46:21.868311 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:21 crc kubenswrapper[4981]: E0227 18:46:21.969009 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:22 crc kubenswrapper[4981]: E0227 18:46:22.070489 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:22 crc kubenswrapper[4981]: E0227 18:46:22.171346 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:22 crc kubenswrapper[4981]: E0227 18:46:22.272166 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:22 crc kubenswrapper[4981]: E0227 18:46:22.373452 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:22 crc kubenswrapper[4981]: E0227 18:46:22.474751 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:22 crc kubenswrapper[4981]: E0227 18:46:22.575564 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:22 crc kubenswrapper[4981]: E0227 18:46:22.676703 4981 kubelet_node_status.go:503] "Error getting the 
current node from lister" err="node \"crc\" not found" Feb 27 18:46:22 crc kubenswrapper[4981]: E0227 18:46:22.777879 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:22 crc kubenswrapper[4981]: E0227 18:46:22.878709 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:22 crc kubenswrapper[4981]: E0227 18:46:22.979760 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:23 crc kubenswrapper[4981]: E0227 18:46:23.080122 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:23 crc kubenswrapper[4981]: E0227 18:46:23.180745 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:23 crc kubenswrapper[4981]: E0227 18:46:23.281847 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:23 crc kubenswrapper[4981]: E0227 18:46:23.382614 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:23 crc kubenswrapper[4981]: E0227 18:46:23.483164 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:23 crc kubenswrapper[4981]: E0227 18:46:23.584302 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:23 crc kubenswrapper[4981]: E0227 18:46:23.684637 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:23 crc kubenswrapper[4981]: E0227 18:46:23.785024 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:23 crc kubenswrapper[4981]: E0227 18:46:23.885806 4981 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:23 crc kubenswrapper[4981]: E0227 18:46:23.987155 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:24 crc kubenswrapper[4981]: E0227 18:46:24.087712 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:24 crc kubenswrapper[4981]: E0227 18:46:24.188558 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:24 crc kubenswrapper[4981]: E0227 18:46:24.289151 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:24 crc kubenswrapper[4981]: E0227 18:46:24.390133 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:24 crc kubenswrapper[4981]: E0227 18:46:24.491186 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:24 crc kubenswrapper[4981]: E0227 18:46:24.592493 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:24 crc kubenswrapper[4981]: E0227 18:46:24.692968 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:24 crc kubenswrapper[4981]: E0227 18:46:24.794128 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:24 crc kubenswrapper[4981]: E0227 18:46:24.896603 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:24 crc kubenswrapper[4981]: E0227 18:46:24.997208 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:25 crc 
kubenswrapper[4981]: E0227 18:46:25.097504 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:25 crc kubenswrapper[4981]: E0227 18:46:25.198551 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:25 crc kubenswrapper[4981]: E0227 18:46:25.299357 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:25 crc kubenswrapper[4981]: E0227 18:46:25.399610 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:25 crc kubenswrapper[4981]: E0227 18:46:25.499771 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:25 crc kubenswrapper[4981]: E0227 18:46:25.600113 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:25 crc kubenswrapper[4981]: E0227 18:46:25.700450 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:25 crc kubenswrapper[4981]: E0227 18:46:25.800828 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:25 crc kubenswrapper[4981]: E0227 18:46:25.900925 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:26 crc kubenswrapper[4981]: E0227 18:46:26.001614 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:26 crc kubenswrapper[4981]: E0227 18:46:26.102556 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:26 crc kubenswrapper[4981]: E0227 18:46:26.202757 4981 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 27 18:46:26 crc kubenswrapper[4981]: E0227 18:46:26.302940 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:26 crc kubenswrapper[4981]: E0227 18:46:26.403435 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:26 crc kubenswrapper[4981]: E0227 18:46:26.503803 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:26 crc kubenswrapper[4981]: E0227 18:46:26.604635 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:26 crc kubenswrapper[4981]: E0227 18:46:26.705038 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:26 crc kubenswrapper[4981]: E0227 18:46:26.805999 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:26 crc kubenswrapper[4981]: E0227 18:46:26.906834 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:27 crc kubenswrapper[4981]: E0227 18:46:27.007596 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:27 crc kubenswrapper[4981]: E0227 18:46:27.108719 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:27 crc kubenswrapper[4981]: E0227 18:46:27.209390 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:27 crc kubenswrapper[4981]: E0227 18:46:27.310503 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:27 crc kubenswrapper[4981]: E0227 18:46:27.410687 4981 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:27 crc kubenswrapper[4981]: E0227 18:46:27.511606 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:27 crc kubenswrapper[4981]: E0227 18:46:27.612461 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:27 crc kubenswrapper[4981]: E0227 18:46:27.712848 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:27 crc kubenswrapper[4981]: E0227 18:46:27.813308 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:27 crc kubenswrapper[4981]: E0227 18:46:27.914099 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:28 crc kubenswrapper[4981]: E0227 18:46:28.015159 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.038689 4981 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.117921 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.117977 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.117997 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.118024 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.118043 
4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:28Z","lastTransitionTime":"2026-02-27T18:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.221139 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.221194 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.221213 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.221235 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.221254 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:28Z","lastTransitionTime":"2026-02-27T18:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.324884 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.324922 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.324941 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.324963 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.324981 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:28Z","lastTransitionTime":"2026-02-27T18:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.428473 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.428525 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.428542 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.428596 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.428613 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:28Z","lastTransitionTime":"2026-02-27T18:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.532221 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.532291 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.532321 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.532349 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.532372 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:28Z","lastTransitionTime":"2026-02-27T18:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.562506 4981 apiserver.go:52] "Watching apiserver" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.570044 4981 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.570472 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.570976 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.571120 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.571172 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:46:28 crc kubenswrapper[4981]: E0227 18:46:28.571366 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.571490 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 18:46:28 crc kubenswrapper[4981]: E0227 18:46:28.571756 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.571868 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.572259 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:28 crc kubenswrapper[4981]: E0227 18:46:28.572379 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.573965 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.575687 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.575736 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.576120 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.576264 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.576392 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.576625 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.576683 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.576975 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.614867 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.635405 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.636601 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.636732 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.636753 4981 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.636776 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.636834 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:28Z","lastTransitionTime":"2026-02-27T18:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.650447 4981 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.655262 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.670029 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.670151 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") 
pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.670221 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.670274 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.670494 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.670532 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.670566 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.670602 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.670634 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.670668 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.670672 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). 
InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.670703 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.670733 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.670769 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.670842 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.670876 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.670909 4981 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.670942 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.670978 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.671009 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.671044 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.671103 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.671138 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.671171 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.671204 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.671236 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.671268 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.671299 4981 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.671335 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.671415 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.671450 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.671543 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.671579 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.671614 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.671649 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.671684 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.671715 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.671750 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 27 18:46:28 crc 
kubenswrapper[4981]: I0227 18:46:28.671785 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.671820 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.671852 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.671925 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.671995 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.672026 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.672081 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.672116 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.672148 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.672178 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.672211 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.672241 4981 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.672273 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.672305 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.672365 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.672396 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.672433 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.672465 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.670837 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.670864 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.671734 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.672258 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.672281 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.672472 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.672768 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.672499 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.672898 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.672964 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.673012 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.673107 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.673159 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.673210 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.673259 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.673319 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.673361 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.673378 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.673438 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.673487 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.673538 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.673587 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.673636 4981 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.673688 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.673734 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.673780 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.673830 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.673876 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.673929 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.673981 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.674034 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.674128 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.674181 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.674232 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.674284 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.674329 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.673382 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.673392 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.674123 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.674354 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.674580 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.674378 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.681837 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.682008 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.682227 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.682450 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.682610 4981 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.682757 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.682908 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.683050 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.683249 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.683426 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.683570 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.683720 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.683870 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.684017 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.684240 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 
27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.684433 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.684621 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.684835 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.685045 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.685964 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.686546 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.686624 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.686690 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.686745 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.686792 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.686843 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.686903 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.686958 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.686998 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687036 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687108 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687147 4981 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687185 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687221 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687258 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687295 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687332 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687366 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687401 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687433 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687470 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687506 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687548 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687584 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687620 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687662 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687695 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687732 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 27 18:46:28 crc 
kubenswrapper[4981]: I0227 18:46:28.685091 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687766 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.685213 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.685249 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687805 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.685312 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.685511 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687842 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687880 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687913 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687948 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687985 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.688023 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.688109 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.688149 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.688191 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.688229 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.688264 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: 
\"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.688299 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.688365 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.688402 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.688439 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.688476 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.688513 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.688550 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.688585 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.688623 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.688661 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.688634 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.688706 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.689016 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.689124 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.689196 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.689256 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.689315 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.689374 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.689553 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.689613 4981 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.689670 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.689726 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.689775 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.689828 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.685840 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod 
"8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.686145 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.689888 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.689950 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.690008 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.690108 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" 
(UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.690172 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.690246 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.690303 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.690361 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.690415 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.690468 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.690528 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.690581 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.690634 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.690693 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.690751 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.690802 
4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.690868 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.690921 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.690980 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.691035 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.691121 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" 
(UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.691185 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.691240 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.691294 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.691350 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.691412 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.691464 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.691518 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.691622 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.691686 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.691746 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.691808 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.691873 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.691932 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.691987 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.692043 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.692140 4981 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.692204 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.692268 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.692349 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.692421 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.692493 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.692624 4981 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.692659 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.692691 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.692723 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.692759 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 
18:46:28.692789 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.692820 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.692851 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.692880 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.692913 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.692942 4981 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.692975 4981 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.693006 4981 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" 
(UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.693034 4981 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.693115 4981 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.693148 4981 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.693180 4981 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.693209 4981 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.693247 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.693277 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node 
\"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.693306 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.686536 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.686687 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687186 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687343 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.687463 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.688412 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.688497 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.689077 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.689186 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.689636 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.689903 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.690049 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.690197 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.690689 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.690784 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.690800 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.690913 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.691164 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.696572 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.696597 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.691378 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.692227 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.692364 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.692997 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.693105 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.693244 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: E0227 18:46:28.693482 4981 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 18:46:28 crc kubenswrapper[4981]: E0227 18:46:28.696754 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 18:46:29.196726528 +0000 UTC m=+88.675507718 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.696940 4981 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.697025 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.697150 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: E0227 18:46:28.697201 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-27 18:46:29.197172971 +0000 UTC m=+88.675954171 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.693533 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.693728 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.694150 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.694228 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.694365 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.694423 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.694494 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.694648 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.694677 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.695628 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.695800 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.695656 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.696098 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.696220 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.697588 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.698385 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.697464 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.698643 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.699114 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 18:46:28 crc kubenswrapper[4981]: E0227 18:46:28.699265 4981 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 
18:46:28.699403 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.699407 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.699790 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: E0227 18:46:28.699820 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 18:46:29.199789474 +0000 UTC m=+88.678570664 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.700240 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.703839 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.704200 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.704881 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.708585 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.708826 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.708717 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.709391 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.709395 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.709918 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.709938 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.710025 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.710100 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.710176 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.712884 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.712999 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.713444 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.713954 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.714774 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.718177 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.718654 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.719230 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.719510 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.719664 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.719736 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.719736 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.720416 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.720663 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.720901 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.719479 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:28 crc kubenswrapper[4981]: E0227 18:46:28.722983 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 18:46:28 crc kubenswrapper[4981]: E0227 18:46:28.723043 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered 
Feb 27 18:46:28 crc kubenswrapper[4981]: E0227 18:46:28.723093 4981 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:46:28 crc kubenswrapper[4981]: E0227 18:46:28.723177 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 18:46:29.223151248 +0000 UTC m=+88.701932448 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.723485 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.723661 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.724531 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.724666 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.724708 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.724874 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.724923 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.725000 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.725078 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.725256 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: E0227 18:46:28.726944 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 18:46:28 crc kubenswrapper[4981]: E0227 18:46:28.726977 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 18:46:28 crc kubenswrapper[4981]: E0227 18:46:28.726991 4981 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:46:28 crc kubenswrapper[4981]: E0227 18:46:28.727088 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 18:46:29.227027556 +0000 UTC m=+88.705808726 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.727147 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.727410 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.727304 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.727534 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.735430 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.735615 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.735671 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.736577 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.736931 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.738311 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.742254 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.742279 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.742430 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.742724 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.742941 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.743023 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.743214 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.743500 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.743732 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.743946 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.743987 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.744005 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.744031 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.749216 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:28Z","lastTransitionTime":"2026-02-27T18:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.744681 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.745617 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.745901 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.746029 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.748742 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.748997 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.749090 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.748710 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.749554 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.749581 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.743986 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.749620 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.749976 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.750029 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.750180 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.750915 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.751954 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.752428 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.753510 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.753903 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.754544 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.754641 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.754903 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.755178 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.755374 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.755601 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.755616 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.755646 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.755840 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.756145 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.756652 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.757403 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.757449 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.757536 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.757959 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.758034 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.758044 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.758396 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.758486 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.758511 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.758557 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.758704 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.759395 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.760219 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.760245 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.760252 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.759345 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.760457 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.761267 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.761660 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.761883 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.761903 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.762129 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.762219 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.762366 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.763126 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.763166 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.763205 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.763224 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.763256 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.766326 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.766340 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.766565 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.766626 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.766721 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.766819 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.767186 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.767383 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.766780 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.767559 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.775197 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.786754 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.787449 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794510 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794576 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794639 4981 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" 
DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794653 4981 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794654 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794666 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794703 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794717 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794740 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794755 4981 reconciler_common.go:293] "Volume detached for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794768 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794780 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794791 4981 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794803 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794815 4981 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794826 4981 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794840 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 
18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794852 4981 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794864 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794875 4981 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794887 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794899 4981 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794910 4981 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794922 4981 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794935 4981 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" 
(UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794947 4981 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794958 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794969 4981 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794980 4981 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.794991 4981 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795005 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795017 4981 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 
27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795028 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795041 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795073 4981 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795085 4981 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795097 4981 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795108 4981 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795120 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795132 
4981 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795144 4981 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795156 4981 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795169 4981 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795181 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795195 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795208 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795221 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" 
(UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795235 4981 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795248 4981 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795262 4981 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795273 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795286 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795298 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795311 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795323 4981 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795335 4981 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795346 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795357 4981 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795369 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795380 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795391 4981 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 
crc kubenswrapper[4981]: I0227 18:46:28.795404 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795415 4981 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795426 4981 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795437 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795449 4981 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795460 4981 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795471 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795486 4981 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" 
(UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795499 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795510 4981 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795523 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795533 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795546 4981 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795558 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795570 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795582 4981 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795597 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795609 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795622 4981 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795633 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795645 4981 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795656 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on 
node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795668 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795679 4981 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795690 4981 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795704 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795715 4981 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795726 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795738 4981 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 
18:46:28.795749 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795760 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795773 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795786 4981 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795800 4981 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795813 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795827 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795837 4981 
reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795849 4981 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795860 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795872 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795883 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795896 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795909 4981 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795921 4981 reconciler_common.go:293] "Volume detached for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795932 4981 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795944 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795955 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795966 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795979 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.795990 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796000 4981 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796011 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796023 4981 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796034 4981 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796046 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796072 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796084 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796096 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: 
\"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796107 4981 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796119 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796132 4981 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796144 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796157 4981 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796168 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796179 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796190 4981 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796201 4981 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796213 4981 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796223 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796251 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796262 4981 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796273 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 
18:46:28.796284 4981 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796296 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796308 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796319 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796331 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796341 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796353 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796364 4981 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796374 4981 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796385 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796396 4981 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796407 4981 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796419 4981 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796429 4981 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796441 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 
18:46:28.796453 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796464 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796479 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796491 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796503 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796514 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796524 4981 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 
18:46:28.796535 4981 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796546 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796557 4981 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796568 4981 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796578 4981 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796591 4981 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796602 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796613 4981 reconciler_common.go:293] "Volume detached 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796624 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796635 4981 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796646 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796658 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796669 4981 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796681 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796693 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on 
node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796705 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796715 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796728 4981 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796739 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.796750 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.799384 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.809345 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.852660 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.852721 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.852740 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.852805 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.852824 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:28Z","lastTransitionTime":"2026-02-27T18:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.897473 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.897669 4981 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.913537 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.929848 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.940707 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.958146 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.958191 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.958208 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.958235 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:28 crc kubenswrapper[4981]: I0227 18:46:28.958310 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:28Z","lastTransitionTime":"2026-02-27T18:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:28 crc kubenswrapper[4981]: W0227 18:46:28.960879 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-b39f4ba2410c9ef1a4b9f6545173c1efcc2c3863e75496cbc8dbc955a2e08678 WatchSource:0}: Error finding container b39f4ba2410c9ef1a4b9f6545173c1efcc2c3863e75496cbc8dbc955a2e08678: Status 404 returned error can't find the container with id b39f4ba2410c9ef1a4b9f6545173c1efcc2c3863e75496cbc8dbc955a2e08678 Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.061712 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.061777 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.061794 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.061819 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.061839 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:29Z","lastTransitionTime":"2026-02-27T18:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.080395 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b39f4ba2410c9ef1a4b9f6545173c1efcc2c3863e75496cbc8dbc955a2e08678"} Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.082192 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8d699cfb192f08cd0279789f5879f07d7759ad560ca88cf9c17783fc492064d4"} Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.084140 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c6ce86f1ecdf1971096720341bdabd5f96ac318fc07f833a5cf57185a8485751"} Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.165652 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.165733 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.165753 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.165779 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.165798 4981 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:29Z","lastTransitionTime":"2026-02-27T18:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.199570 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:46:29 crc kubenswrapper[4981]: E0227 18:46:29.199792 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:46:30.19975659 +0000 UTC m=+89.678537790 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.199924 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:29 crc kubenswrapper[4981]: E0227 18:46:29.200144 4981 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 18:46:29 crc kubenswrapper[4981]: E0227 18:46:29.200227 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 18:46:30.200211603 +0000 UTC m=+89.678992803 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.268361 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.268442 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.268463 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.268492 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.268513 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:29Z","lastTransitionTime":"2026-02-27T18:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.301258 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.301302 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.301324 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:46:29 crc kubenswrapper[4981]: E0227 18:46:29.301452 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 18:46:29 crc kubenswrapper[4981]: E0227 18:46:29.301469 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 18:46:29 crc kubenswrapper[4981]: E0227 18:46:29.301481 4981 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:46:29 crc kubenswrapper[4981]: E0227 18:46:29.301487 4981 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 18:46:29 crc kubenswrapper[4981]: E0227 18:46:29.301507 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 18:46:29 crc kubenswrapper[4981]: E0227 18:46:29.301558 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 18:46:29 crc kubenswrapper[4981]: E0227 18:46:29.301581 4981 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:46:29 crc kubenswrapper[4981]: E0227 18:46:29.301531 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 18:46:30.30151708 +0000 UTC m=+89.780298240 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:46:29 crc kubenswrapper[4981]: E0227 18:46:29.301672 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 18:46:30.301645613 +0000 UTC m=+89.780426803 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 18:46:29 crc kubenswrapper[4981]: E0227 18:46:29.301697 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 18:46:30.301684724 +0000 UTC m=+89.780465924 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.371979 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.372030 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.372065 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.372085 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.372100 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:29Z","lastTransitionTime":"2026-02-27T18:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.475117 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.475179 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.475198 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.475224 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.475242 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:29Z","lastTransitionTime":"2026-02-27T18:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.578139 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.578205 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.578226 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.578257 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.578279 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:29Z","lastTransitionTime":"2026-02-27T18:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.636101 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.637600 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.640350 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.641928 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.644409 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.645804 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.647715 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.649934 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" 
path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.651414 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.653690 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.654910 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.657434 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.658775 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.660031 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.662254 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.663535 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.665717 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.667007 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.668456 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.670678 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.671897 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.674262 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.675360 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.677822 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.678836 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.680433 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.682123 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.682572 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.682638 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.682658 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.682683 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.682702 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:29Z","lastTransitionTime":"2026-02-27T18:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.683185 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.684658 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.685402 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.686690 4981 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.686930 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.689443 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.690029 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.690103 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.690122 4981 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.690147 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.690164 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:29Z","lastTransitionTime":"2026-02-27T18:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.691334 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.692041 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.694275 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.695678 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.697078 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" 
path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.698326 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.700454 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.701363 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.702752 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.703782 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.705290 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: E0227 18:46:29.705653 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.706282 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.707652 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.708571 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.710299 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.711117 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.712343 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.713208 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 27 18:46:29 crc 
kubenswrapper[4981]: I0227 18:46:29.714643 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.715669 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.716576 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.723773 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.723869 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.723930 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.724008 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.724270 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:29Z","lastTransitionTime":"2026-02-27T18:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:29 crc kubenswrapper[4981]: E0227 18:46:29.752487 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.764390 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.764452 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.764472 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.764498 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.764521 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:29Z","lastTransitionTime":"2026-02-27T18:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:29 crc kubenswrapper[4981]: E0227 18:46:29.778095 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.783708 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.783790 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.783821 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.783861 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.783891 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:29Z","lastTransitionTime":"2026-02-27T18:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:29 crc kubenswrapper[4981]: E0227 18:46:29.820851 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.825206 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.825304 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.825384 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.825469 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.825536 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:29Z","lastTransitionTime":"2026-02-27T18:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:29 crc kubenswrapper[4981]: E0227 18:46:29.835677 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:29 crc kubenswrapper[4981]: E0227 18:46:29.835800 4981 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.837328 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.837414 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.837487 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.837551 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.837617 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:29Z","lastTransitionTime":"2026-02-27T18:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.940242 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.940271 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.940279 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.940295 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:29 crc kubenswrapper[4981]: I0227 18:46:29.940306 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:29Z","lastTransitionTime":"2026-02-27T18:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.042969 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.043018 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.043035 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.043087 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.043104 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:30Z","lastTransitionTime":"2026-02-27T18:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.089467 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454"} Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.089528 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490"} Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.095709 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335"} Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.105767 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.125049 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.139822 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.145735 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.145973 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.146308 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.146468 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.146686 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:30Z","lastTransitionTime":"2026-02-27T18:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.149868 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.163719 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.178142 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.191860 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.206010 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.209483 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.209603 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:30 crc kubenswrapper[4981]: E0227 18:46:30.209740 4981 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 18:46:30 crc kubenswrapper[4981]: E0227 
18:46:30.209810 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 18:46:32.209792038 +0000 UTC m=+91.688573198 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 18:46:30 crc kubenswrapper[4981]: E0227 18:46:30.210188 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:46:32.210133146 +0000 UTC m=+91.688914346 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.219981 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.234174 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.249980 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.250029 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.250079 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.250107 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.250127 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:30Z","lastTransitionTime":"2026-02-27T18:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.252130 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T
18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.267827 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 18:46:30 crc kubenswrapper[4981]: 
I0227 18:46:30.310770 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.310832 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.310872 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:46:30 crc kubenswrapper[4981]: E0227 18:46:30.311101 4981 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 18:46:30 crc kubenswrapper[4981]: E0227 18:46:30.311239 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 18:46:32.311208546 +0000 UTC m=+91.789989736 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 18:46:30 crc kubenswrapper[4981]: E0227 18:46:30.311252 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 18:46:30 crc kubenswrapper[4981]: E0227 18:46:30.311287 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 18:46:30 crc kubenswrapper[4981]: E0227 18:46:30.311120 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 18:46:30 crc kubenswrapper[4981]: E0227 18:46:30.311309 4981 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:46:30 crc kubenswrapper[4981]: E0227 18:46:30.311360 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 18:46:30 crc kubenswrapper[4981]: E0227 18:46:30.311392 4981 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:46:30 crc kubenswrapper[4981]: E0227 18:46:30.311401 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 18:46:32.311378271 +0000 UTC m=+91.790159461 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:46:30 crc kubenswrapper[4981]: E0227 18:46:30.311466 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 18:46:32.311442173 +0000 UTC m=+91.790223363 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.353392 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.353441 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.353455 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.353476 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.353493 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:30Z","lastTransitionTime":"2026-02-27T18:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.457305 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.457353 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.457371 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.457394 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.457412 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:30Z","lastTransitionTime":"2026-02-27T18:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.560767 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.560850 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.560870 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.560900 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.560924 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:30Z","lastTransitionTime":"2026-02-27T18:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.628558 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.628599 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.628631 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:30 crc kubenswrapper[4981]: E0227 18:46:30.628752 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:46:30 crc kubenswrapper[4981]: E0227 18:46:30.628918 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:46:30 crc kubenswrapper[4981]: E0227 18:46:30.629028 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.663714 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.663775 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.663798 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.663824 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.663845 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:30Z","lastTransitionTime":"2026-02-27T18:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.766578 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.766634 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.766652 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.766677 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.766695 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:30Z","lastTransitionTime":"2026-02-27T18:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.869911 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.869964 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.869976 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.869997 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.870013 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:30Z","lastTransitionTime":"2026-02-27T18:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.972840 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.972913 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.972932 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.972961 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:30 crc kubenswrapper[4981]: I0227 18:46:30.972987 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:30Z","lastTransitionTime":"2026-02-27T18:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.076576 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.076653 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.076672 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.076704 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.076726 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:31Z","lastTransitionTime":"2026-02-27T18:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.179801 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.179865 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.179896 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.179925 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.179947 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:31Z","lastTransitionTime":"2026-02-27T18:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.284196 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.284269 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.284288 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.284314 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.284333 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:31Z","lastTransitionTime":"2026-02-27T18:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.387278 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.387331 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.387349 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.387372 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.387392 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:31Z","lastTransitionTime":"2026-02-27T18:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.491440 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.491500 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.491518 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.491542 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.491560 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:31Z","lastTransitionTime":"2026-02-27T18:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.594164 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.594241 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.594259 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.594286 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.594307 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:31Z","lastTransitionTime":"2026-02-27T18:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.647680 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.648834 4981 scope.go:117] "RemoveContainer" containerID="8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d" Feb 27 18:46:31 crc kubenswrapper[4981]: E0227 18:46:31.649155 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.655275 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.678160 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.696545 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.696585 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.696596 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.696616 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.696628 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:31Z","lastTransitionTime":"2026-02-27T18:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.703553 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.724466 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.750504 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.772517 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.798898 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.798929 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.798938 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.798951 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.798961 4981 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:31Z","lastTransitionTime":"2026-02-27T18:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.901453 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.901497 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.901508 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.901522 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:31 crc kubenswrapper[4981]: I0227 18:46:31.901532 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:31Z","lastTransitionTime":"2026-02-27T18:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.006600 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.006666 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.006690 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.006714 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.006733 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:32Z","lastTransitionTime":"2026-02-27T18:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.101923 4981 scope.go:117] "RemoveContainer" containerID="8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d" Feb 27 18:46:32 crc kubenswrapper[4981]: E0227 18:46:32.102097 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.110125 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.110154 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.110162 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.110174 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.110184 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:32Z","lastTransitionTime":"2026-02-27T18:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.213654 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.213730 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.213745 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.213769 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.213781 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:32Z","lastTransitionTime":"2026-02-27T18:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.230296 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.230441 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:32 crc kubenswrapper[4981]: E0227 18:46:32.230604 4981 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 18:46:32 crc kubenswrapper[4981]: E0227 18:46:32.230683 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 18:46:36.230662183 +0000 UTC m=+95.709443373 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 18:46:32 crc kubenswrapper[4981]: E0227 18:46:32.231162 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:46:36.231145387 +0000 UTC m=+95.709926577 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.317323 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.317371 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.317391 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.317414 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.317433 4981 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:32Z","lastTransitionTime":"2026-02-27T18:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.330960 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.331023 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.331092 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:46:32 crc kubenswrapper[4981]: E0227 18:46:32.331222 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 18:46:32 crc kubenswrapper[4981]: E0227 18:46:32.331252 4981 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 18:46:32 crc kubenswrapper[4981]: E0227 18:46:32.331262 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 18:46:32 crc kubenswrapper[4981]: E0227 18:46:32.331279 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 18:46:32 crc kubenswrapper[4981]: E0227 18:46:32.331272 4981 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 18:46:32 crc kubenswrapper[4981]: E0227 18:46:32.331420 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 18:46:36.331390993 +0000 UTC m=+95.810172383 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 18:46:32 crc kubenswrapper[4981]: E0227 18:46:32.331291 4981 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:46:32 crc kubenswrapper[4981]: E0227 18:46:32.331298 4981 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:46:32 crc kubenswrapper[4981]: E0227 18:46:32.331520 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 18:46:36.331493586 +0000 UTC m=+95.810274786 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:46:32 crc kubenswrapper[4981]: E0227 18:46:32.331679 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 18:46:36.33163368 +0000 UTC m=+95.810414880 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.422531 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.422588 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.422607 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.422633 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.422652 4981 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:32Z","lastTransitionTime":"2026-02-27T18:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.526544 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.526641 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.526660 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.526696 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.526716 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:32Z","lastTransitionTime":"2026-02-27T18:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.627755 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.627780 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:32 crc kubenswrapper[4981]: E0227 18:46:32.627914 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.628340 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:46:32 crc kubenswrapper[4981]: E0227 18:46:32.628504 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:46:32 crc kubenswrapper[4981]: E0227 18:46:32.628619 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.628886 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.628972 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.629002 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.629035 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.629093 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:32Z","lastTransitionTime":"2026-02-27T18:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.731628 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.731667 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.731685 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.731713 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.731732 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:32Z","lastTransitionTime":"2026-02-27T18:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.835504 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.835545 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.835562 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.835583 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.835602 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:32Z","lastTransitionTime":"2026-02-27T18:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.943176 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.943237 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.943255 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.943281 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:32 crc kubenswrapper[4981]: I0227 18:46:32.943299 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:32Z","lastTransitionTime":"2026-02-27T18:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.047159 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.047223 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.047241 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.047267 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.047288 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:33Z","lastTransitionTime":"2026-02-27T18:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.106984 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26"} Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.130542 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:33Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.150157 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.150224 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.150244 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.150271 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.150296 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:33Z","lastTransitionTime":"2026-02-27T18:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.151879 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:33Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.172439 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:33Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.191619 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:33Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.209697 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:33Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.229003 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:46:33Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.248787 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:33Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.256788 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.256859 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.256878 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.256906 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.256926 4981 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:33Z","lastTransitionTime":"2026-02-27T18:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.359970 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.360092 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.360114 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.360136 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.360153 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:33Z","lastTransitionTime":"2026-02-27T18:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.462642 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.462743 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.462762 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.462787 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.462808 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:33Z","lastTransitionTime":"2026-02-27T18:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.565251 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.565292 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.565308 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.565324 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.565337 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:33Z","lastTransitionTime":"2026-02-27T18:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.638338 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.668100 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.668129 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.668139 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.668151 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.668159 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:33Z","lastTransitionTime":"2026-02-27T18:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.770464 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.770531 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.770548 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.770573 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.770594 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:33Z","lastTransitionTime":"2026-02-27T18:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.873339 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.873378 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.873399 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.873414 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.873426 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:33Z","lastTransitionTime":"2026-02-27T18:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.977925 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.977955 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.977963 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.977974 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:33 crc kubenswrapper[4981]: I0227 18:46:33.977982 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:33Z","lastTransitionTime":"2026-02-27T18:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.080724 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.080817 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.080872 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.080896 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.080914 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:34Z","lastTransitionTime":"2026-02-27T18:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.183332 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.183393 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.183411 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.183433 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.183456 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:34Z","lastTransitionTime":"2026-02-27T18:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.286119 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.286177 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.286195 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.286220 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.286246 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:34Z","lastTransitionTime":"2026-02-27T18:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.389416 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.389481 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.389505 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.389534 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.389557 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:34Z","lastTransitionTime":"2026-02-27T18:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.492527 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.492589 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.492601 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.492617 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.492629 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:34Z","lastTransitionTime":"2026-02-27T18:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.600246 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.600311 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.600329 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.600355 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.600375 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:34Z","lastTransitionTime":"2026-02-27T18:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.627951 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.628252 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:34 crc kubenswrapper[4981]: E0227 18:46:34.628195 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.628272 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:46:34 crc kubenswrapper[4981]: E0227 18:46:34.628441 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:46:34 crc kubenswrapper[4981]: E0227 18:46:34.628500 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.703375 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.703431 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.703450 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.703474 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.703502 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:34Z","lastTransitionTime":"2026-02-27T18:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.806502 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.806602 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.806630 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.806666 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.806692 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:34Z","lastTransitionTime":"2026-02-27T18:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.909413 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.909488 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.909509 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.909542 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:34 crc kubenswrapper[4981]: I0227 18:46:34.909561 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:34Z","lastTransitionTime":"2026-02-27T18:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.012782 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.012812 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.012824 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.012837 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.012848 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:35Z","lastTransitionTime":"2026-02-27T18:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.115281 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.115371 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.115397 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.115429 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.115449 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:35Z","lastTransitionTime":"2026-02-27T18:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.218207 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.218264 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.218286 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.218316 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.218338 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:35Z","lastTransitionTime":"2026-02-27T18:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.321007 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.321070 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.321083 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.321100 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.321115 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:35Z","lastTransitionTime":"2026-02-27T18:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.424004 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.424087 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.424109 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.424133 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.424152 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:35Z","lastTransitionTime":"2026-02-27T18:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.527190 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.527250 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.527275 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.527305 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.527328 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:35Z","lastTransitionTime":"2026-02-27T18:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.630142 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.630230 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.630287 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.630326 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.630352 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:35Z","lastTransitionTime":"2026-02-27T18:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.734868 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.734955 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.734976 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.735006 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.735042 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:35Z","lastTransitionTime":"2026-02-27T18:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.843405 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.843458 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.843476 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.843499 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.843514 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:35Z","lastTransitionTime":"2026-02-27T18:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.946567 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.946636 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.946654 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.946681 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:35 crc kubenswrapper[4981]: I0227 18:46:35.946701 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:35Z","lastTransitionTime":"2026-02-27T18:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.050262 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.050321 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.050339 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.050362 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.050382 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:36Z","lastTransitionTime":"2026-02-27T18:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.153650 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.153717 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.153730 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.153752 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.153768 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:36Z","lastTransitionTime":"2026-02-27T18:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.256611 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.256672 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.256692 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.256718 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.256742 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:36Z","lastTransitionTime":"2026-02-27T18:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.268133 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.268257 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:36 crc kubenswrapper[4981]: E0227 18:46:36.268352 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:46:44.268307129 +0000 UTC m=+103.747088329 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:46:36 crc kubenswrapper[4981]: E0227 18:46:36.268503 4981 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 18:46:36 crc kubenswrapper[4981]: E0227 18:46:36.268617 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 18:46:44.268590437 +0000 UTC m=+103.747371637 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.359308 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.359383 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.359400 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.359429 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.359452 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:36Z","lastTransitionTime":"2026-02-27T18:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.369856 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.369946 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.369987 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:46:36 crc kubenswrapper[4981]: E0227 18:46:36.370192 4981 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 18:46:36 crc kubenswrapper[4981]: E0227 18:46:36.370232 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 18:46:36 crc kubenswrapper[4981]: E0227 18:46:36.370261 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 18:46:36 crc kubenswrapper[4981]: E0227 18:46:36.370283 4981 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:46:36 crc kubenswrapper[4981]: E0227 18:46:36.370295 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 18:46:44.370268983 +0000 UTC m=+103.849050173 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 18:46:36 crc kubenswrapper[4981]: E0227 18:46:36.370309 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 18:46:36 crc kubenswrapper[4981]: E0227 18:46:36.370360 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 18:46:44.370337025 +0000 UTC m=+103.849118215 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:46:36 crc kubenswrapper[4981]: E0227 18:46:36.370381 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 18:46:36 crc kubenswrapper[4981]: E0227 18:46:36.370408 4981 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:46:36 crc kubenswrapper[4981]: E0227 18:46:36.370567 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 18:46:44.37050468 +0000 UTC m=+103.849285880 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.462638 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.462723 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.462750 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.462783 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.462809 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:36Z","lastTransitionTime":"2026-02-27T18:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.566377 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.566462 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.566487 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.566516 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.566537 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:36Z","lastTransitionTime":"2026-02-27T18:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.627980 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.628016 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.628114 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:46:36 crc kubenswrapper[4981]: E0227 18:46:36.628214 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:46:36 crc kubenswrapper[4981]: E0227 18:46:36.628366 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:46:36 crc kubenswrapper[4981]: E0227 18:46:36.628503 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.672005 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.672080 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.672093 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.672115 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.672129 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:36Z","lastTransitionTime":"2026-02-27T18:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.774238 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.774316 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.774338 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.774370 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.774405 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:36Z","lastTransitionTime":"2026-02-27T18:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.878626 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.878701 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.878721 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.878751 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.878775 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:36Z","lastTransitionTime":"2026-02-27T18:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.983132 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.983188 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.983204 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.983227 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:36 crc kubenswrapper[4981]: I0227 18:46:36.983242 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:36Z","lastTransitionTime":"2026-02-27T18:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.086212 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.086274 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.086294 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.086321 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.086345 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:37Z","lastTransitionTime":"2026-02-27T18:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.189277 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.189331 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.189351 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.189373 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.189392 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:37Z","lastTransitionTime":"2026-02-27T18:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.291789 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.291863 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.291882 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.291908 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.291932 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:37Z","lastTransitionTime":"2026-02-27T18:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.394998 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.395104 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.395126 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.395156 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.395176 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:37Z","lastTransitionTime":"2026-02-27T18:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.498342 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.498416 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.498433 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.498462 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.498482 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:37Z","lastTransitionTime":"2026-02-27T18:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.600840 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.600920 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.600946 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.600986 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.601014 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:37Z","lastTransitionTime":"2026-02-27T18:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.703963 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.704025 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.704047 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.704103 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.704123 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:37Z","lastTransitionTime":"2026-02-27T18:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.806836 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.806917 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.806942 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.806975 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.807004 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:37Z","lastTransitionTime":"2026-02-27T18:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.909270 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.909325 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.909340 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.909361 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:37 crc kubenswrapper[4981]: I0227 18:46:37.909375 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:37Z","lastTransitionTime":"2026-02-27T18:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.012502 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.012550 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.012563 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.012583 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.012597 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:38Z","lastTransitionTime":"2026-02-27T18:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.118851 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.118956 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.118984 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.119021 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.119049 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:38Z","lastTransitionTime":"2026-02-27T18:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.222227 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.222283 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.222303 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.222337 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.222362 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:38Z","lastTransitionTime":"2026-02-27T18:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.324599 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.324652 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.324679 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.324708 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.324726 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:38Z","lastTransitionTime":"2026-02-27T18:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.427638 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.427712 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.427737 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.427770 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.427793 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:38Z","lastTransitionTime":"2026-02-27T18:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.530274 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.530343 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.530362 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.530392 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.530415 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:38Z","lastTransitionTime":"2026-02-27T18:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.628402 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.628422 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.628507 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:38 crc kubenswrapper[4981]: E0227 18:46:38.628542 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:46:38 crc kubenswrapper[4981]: E0227 18:46:38.628687 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:46:38 crc kubenswrapper[4981]: E0227 18:46:38.628824 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.633744 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.633803 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.633823 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.633848 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.633867 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:38Z","lastTransitionTime":"2026-02-27T18:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.737183 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.737255 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.737275 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.737303 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.737323 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:38Z","lastTransitionTime":"2026-02-27T18:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.840439 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.840508 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.840527 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.840552 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.840571 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:38Z","lastTransitionTime":"2026-02-27T18:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.943348 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.943414 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.943430 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.943669 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:38 crc kubenswrapper[4981]: I0227 18:46:38.943684 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:38Z","lastTransitionTime":"2026-02-27T18:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.047642 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.047703 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.047726 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.047752 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.047770 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:39Z","lastTransitionTime":"2026-02-27T18:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.150403 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.150478 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.150505 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.150545 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.150573 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:39Z","lastTransitionTime":"2026-02-27T18:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.254043 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.254125 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.254143 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.254168 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.254186 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:39Z","lastTransitionTime":"2026-02-27T18:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.357354 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.357465 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.357485 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.357555 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.357574 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:39Z","lastTransitionTime":"2026-02-27T18:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.460246 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.460296 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.460310 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.460328 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.460344 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:39Z","lastTransitionTime":"2026-02-27T18:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.563844 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.563910 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.563929 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.563966 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.563986 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:39Z","lastTransitionTime":"2026-02-27T18:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.667275 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.667369 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.667389 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.667421 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.667445 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:39Z","lastTransitionTime":"2026-02-27T18:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.770708 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.770795 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.770814 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.770845 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.770868 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:39Z","lastTransitionTime":"2026-02-27T18:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.874168 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.874235 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.874259 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.874290 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.874313 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:39Z","lastTransitionTime":"2026-02-27T18:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.978374 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.978443 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.978463 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.978493 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:39 crc kubenswrapper[4981]: I0227 18:46:39.978513 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:39Z","lastTransitionTime":"2026-02-27T18:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.081416 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.081467 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.081486 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.081510 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.081529 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:40Z","lastTransitionTime":"2026-02-27T18:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.185421 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.185503 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.185522 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.185554 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.185579 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:40Z","lastTransitionTime":"2026-02-27T18:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.229820 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.229888 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.229911 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.229940 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.229961 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:40Z","lastTransitionTime":"2026-02-27T18:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:40 crc kubenswrapper[4981]: E0227 18:46:40.252657 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:40Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.259004 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.259116 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.259136 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.259165 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.259184 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:40Z","lastTransitionTime":"2026-02-27T18:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:40 crc kubenswrapper[4981]: E0227 18:46:40.279229 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:40Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.284129 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.284181 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.284201 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.284225 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.284249 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:40Z","lastTransitionTime":"2026-02-27T18:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:40 crc kubenswrapper[4981]: E0227 18:46:40.303251 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:40Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.307567 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.307614 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.307631 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.307652 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.307672 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:40Z","lastTransitionTime":"2026-02-27T18:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:40 crc kubenswrapper[4981]: E0227 18:46:40.326884 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:40Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.332776 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.332838 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.332855 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.332879 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.332898 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:40Z","lastTransitionTime":"2026-02-27T18:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:40 crc kubenswrapper[4981]: E0227 18:46:40.358243 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:40Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:40 crc kubenswrapper[4981]: E0227 18:46:40.358495 4981 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.360439 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.360492 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.360510 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.360536 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.360557 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:40Z","lastTransitionTime":"2026-02-27T18:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.463638 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.463796 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.463818 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.463905 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.464372 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:40Z","lastTransitionTime":"2026-02-27T18:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.568450 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.568499 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.568520 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.568547 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.568569 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:40Z","lastTransitionTime":"2026-02-27T18:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.627543 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.627609 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.627543 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:46:40 crc kubenswrapper[4981]: E0227 18:46:40.627773 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:46:40 crc kubenswrapper[4981]: E0227 18:46:40.627849 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:46:40 crc kubenswrapper[4981]: E0227 18:46:40.628005 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.672042 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.672161 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.672186 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.672220 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.672246 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:40Z","lastTransitionTime":"2026-02-27T18:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.775079 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.775149 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.775168 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.775194 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.775246 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:40Z","lastTransitionTime":"2026-02-27T18:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.877871 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.877930 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.877949 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.877972 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.877988 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:40Z","lastTransitionTime":"2026-02-27T18:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.980414 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.980460 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.980477 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.980498 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:40 crc kubenswrapper[4981]: I0227 18:46:40.980515 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:40Z","lastTransitionTime":"2026-02-27T18:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.082926 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.082980 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.083000 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.083029 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.083083 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:41Z","lastTransitionTime":"2026-02-27T18:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.186527 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.186594 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.186615 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.186639 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.186658 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:41Z","lastTransitionTime":"2026-02-27T18:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.289048 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.289135 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.289161 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.289191 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.289215 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:41Z","lastTransitionTime":"2026-02-27T18:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.392189 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.392250 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.392268 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.392295 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.392313 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:41Z","lastTransitionTime":"2026-02-27T18:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.495410 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.495459 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.495476 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.495497 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.495514 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:41Z","lastTransitionTime":"2026-02-27T18:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.598906 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.598980 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.598999 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.599026 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.599048 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:41Z","lastTransitionTime":"2026-02-27T18:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.651641 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:41Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.671716 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:41Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.693605 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:41Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.702944 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.702998 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.703018 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.703049 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.703104 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:41Z","lastTransitionTime":"2026-02-27T18:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.715897 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:41Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.734157 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:41Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.753707 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:46:41Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.774820 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:41Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.795004 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:41Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.806780 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.806851 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.806873 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.806904 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.806930 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:41Z","lastTransitionTime":"2026-02-27T18:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.910258 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.910342 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.910363 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.910401 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:41 crc kubenswrapper[4981]: I0227 18:46:41.910424 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:41Z","lastTransitionTime":"2026-02-27T18:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.013660 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.013755 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.013780 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.013812 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.013833 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:42Z","lastTransitionTime":"2026-02-27T18:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.117434 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.117498 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.117518 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.117546 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.117566 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:42Z","lastTransitionTime":"2026-02-27T18:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.221391 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.221459 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.221477 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.221503 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.221524 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:42Z","lastTransitionTime":"2026-02-27T18:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.325032 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.325150 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.325172 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.325202 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.325222 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:42Z","lastTransitionTime":"2026-02-27T18:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.429172 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.429235 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.429251 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.429279 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.429298 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:42Z","lastTransitionTime":"2026-02-27T18:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.533114 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.533182 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.533199 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.533226 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.533244 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:42Z","lastTransitionTime":"2026-02-27T18:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.627852 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.627905 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.627916 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:46:42 crc kubenswrapper[4981]: E0227 18:46:42.628028 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:46:42 crc kubenswrapper[4981]: E0227 18:46:42.628221 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:46:42 crc kubenswrapper[4981]: E0227 18:46:42.628587 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.636930 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.637007 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.637030 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.637104 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.637141 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:42Z","lastTransitionTime":"2026-02-27T18:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.739708 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.739773 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.739791 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.739822 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.739846 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:42Z","lastTransitionTime":"2026-02-27T18:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.843190 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.843254 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.843272 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.843297 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.843316 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:42Z","lastTransitionTime":"2026-02-27T18:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.946556 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.946611 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.946628 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.946649 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:42 crc kubenswrapper[4981]: I0227 18:46:42.946669 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:42Z","lastTransitionTime":"2026-02-27T18:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.050347 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.050390 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.050407 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.050428 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.050444 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:43Z","lastTransitionTime":"2026-02-27T18:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.152898 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.152946 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.152965 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.152985 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.153002 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:43Z","lastTransitionTime":"2026-02-27T18:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.255965 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.256304 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.256358 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.256386 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.256407 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:43Z","lastTransitionTime":"2026-02-27T18:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.359470 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.359534 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.359552 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.359578 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.359598 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:43Z","lastTransitionTime":"2026-02-27T18:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.462314 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.462369 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.462388 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.462414 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.462433 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:43Z","lastTransitionTime":"2026-02-27T18:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.565914 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.565977 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.565998 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.566023 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.566042 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:43Z","lastTransitionTime":"2026-02-27T18:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.670093 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.670144 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.670160 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.670183 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.670201 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:43Z","lastTransitionTime":"2026-02-27T18:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.773200 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.773266 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.773284 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.773314 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.773334 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:43Z","lastTransitionTime":"2026-02-27T18:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.876942 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.877010 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.877034 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.877167 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.877259 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:43Z","lastTransitionTime":"2026-02-27T18:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.979899 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.979965 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.979988 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.980018 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:43 crc kubenswrapper[4981]: I0227 18:46:43.980038 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:43Z","lastTransitionTime":"2026-02-27T18:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.084024 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.084125 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.084144 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.084170 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.084187 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:44Z","lastTransitionTime":"2026-02-27T18:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.188266 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.188332 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.188352 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.188379 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.188399 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:44Z","lastTransitionTime":"2026-02-27T18:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.291877 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.291949 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.291967 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.291992 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.292010 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:44Z","lastTransitionTime":"2026-02-27T18:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.351029 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.351153 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:44 crc kubenswrapper[4981]: E0227 18:46:44.351349 4981 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 18:46:44 crc kubenswrapper[4981]: E0227 18:46:44.351454 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:00.35140136 +0000 UTC m=+119.830182560 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:46:44 crc kubenswrapper[4981]: E0227 18:46:44.351554 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 18:47:00.351522823 +0000 UTC m=+119.830304063 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.395521 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.395584 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.395601 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.395627 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.395645 4981 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:44Z","lastTransitionTime":"2026-02-27T18:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.451816 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.451874 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.451934 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:46:44 crc kubenswrapper[4981]: E0227 18:46:44.452028 4981 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 18:46:44 crc kubenswrapper[4981]: E0227 18:46:44.452163 4981 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 18:46:44 crc kubenswrapper[4981]: E0227 18:46:44.452177 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 18:47:00.45214737 +0000 UTC m=+119.930928560 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 18:46:44 crc kubenswrapper[4981]: E0227 18:46:44.452190 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 18:46:44 crc kubenswrapper[4981]: E0227 18:46:44.452210 4981 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:46:44 crc kubenswrapper[4981]: E0227 18:46:44.452280 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 18:47:00.452258054 +0000 UTC m=+119.931039254 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:46:44 crc kubenswrapper[4981]: E0227 18:46:44.452284 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 18:46:44 crc kubenswrapper[4981]: E0227 18:46:44.452330 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 18:46:44 crc kubenswrapper[4981]: E0227 18:46:44.452389 4981 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:46:44 crc kubenswrapper[4981]: E0227 18:46:44.452512 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 18:47:00.45248734 +0000 UTC m=+119.931268530 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.499379 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.499448 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.499468 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.499494 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.499515 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:44Z","lastTransitionTime":"2026-02-27T18:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.602811 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.602877 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.602895 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.602922 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.602943 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:44Z","lastTransitionTime":"2026-02-27T18:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.627631 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.627692 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:46:44 crc kubenswrapper[4981]: E0227 18:46:44.627819 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.627903 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:44 crc kubenswrapper[4981]: E0227 18:46:44.628101 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:46:44 crc kubenswrapper[4981]: E0227 18:46:44.628595 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.649343 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.706276 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.706344 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.706364 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.706388 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.706408 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:44Z","lastTransitionTime":"2026-02-27T18:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.809686 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.809750 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.809768 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.809795 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.809814 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:44Z","lastTransitionTime":"2026-02-27T18:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.912794 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.912862 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.912879 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.912905 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:44 crc kubenswrapper[4981]: I0227 18:46:44.912924 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:44Z","lastTransitionTime":"2026-02-27T18:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.016484 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.016541 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.016558 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.016583 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.016600 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:45Z","lastTransitionTime":"2026-02-27T18:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.119299 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.119627 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.119776 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.119926 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.120088 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:45Z","lastTransitionTime":"2026-02-27T18:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.226846 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.226905 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.226923 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.226946 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.226965 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:45Z","lastTransitionTime":"2026-02-27T18:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.329923 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.329990 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.330008 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.330033 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.330079 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:45Z","lastTransitionTime":"2026-02-27T18:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.432838 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.432882 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.432899 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.432922 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.432939 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:45Z","lastTransitionTime":"2026-02-27T18:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.535451 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.535517 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.535537 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.535562 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.535580 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:45Z","lastTransitionTime":"2026-02-27T18:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.638847 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.638899 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.638918 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.638941 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.638958 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:45Z","lastTransitionTime":"2026-02-27T18:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.741899 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.741959 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.741976 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.742004 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.742024 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:45Z","lastTransitionTime":"2026-02-27T18:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.845681 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.845764 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.845785 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.845811 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.845834 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:45Z","lastTransitionTime":"2026-02-27T18:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.948353 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.948404 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.948421 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.948446 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:45 crc kubenswrapper[4981]: I0227 18:46:45.948463 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:45Z","lastTransitionTime":"2026-02-27T18:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.050485 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.050557 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.050575 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.050600 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.050621 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:46Z","lastTransitionTime":"2026-02-27T18:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.153049 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.153161 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.153187 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.153214 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.153232 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:46Z","lastTransitionTime":"2026-02-27T18:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.255985 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.256049 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.256100 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.256125 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.256143 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:46Z","lastTransitionTime":"2026-02-27T18:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.359822 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.359871 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.359887 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.359908 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.359924 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:46Z","lastTransitionTime":"2026-02-27T18:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.463121 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.463170 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.463188 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.463208 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.463224 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:46Z","lastTransitionTime":"2026-02-27T18:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.566851 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.566929 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.566956 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.566989 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.567014 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:46Z","lastTransitionTime":"2026-02-27T18:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.628989 4981 scope.go:117] "RemoveContainer" containerID="8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.629432 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:46:46 crc kubenswrapper[4981]: E0227 18:46:46.629544 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.629630 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:46 crc kubenswrapper[4981]: E0227 18:46:46.629712 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.629759 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:46:46 crc kubenswrapper[4981]: E0227 18:46:46.629957 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:46:46 crc kubenswrapper[4981]: E0227 18:46:46.630104 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.670697 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.670774 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.670795 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.670824 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.670844 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:46Z","lastTransitionTime":"2026-02-27T18:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.773387 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.773489 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.773524 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.773558 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.773582 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:46Z","lastTransitionTime":"2026-02-27T18:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.881033 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.881147 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.881167 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.881199 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.881220 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:46Z","lastTransitionTime":"2026-02-27T18:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.984526 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.984598 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.984621 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.984647 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:46 crc kubenswrapper[4981]: I0227 18:46:46.984666 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:46Z","lastTransitionTime":"2026-02-27T18:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.087991 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.088078 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.088097 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.088127 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.088144 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:47Z","lastTransitionTime":"2026-02-27T18:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.191278 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.191677 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.191810 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.191954 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.192120 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:47Z","lastTransitionTime":"2026-02-27T18:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.295551 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.295618 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.295637 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.295662 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.295685 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:47Z","lastTransitionTime":"2026-02-27T18:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.399357 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.399417 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.399436 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.399468 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.399488 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:47Z","lastTransitionTime":"2026-02-27T18:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.503464 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.503538 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.503557 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.503586 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.503605 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:47Z","lastTransitionTime":"2026-02-27T18:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.606991 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.607090 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.607114 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.607144 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.607162 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:47Z","lastTransitionTime":"2026-02-27T18:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.710641 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.710713 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.710732 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.710760 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.710781 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:47Z","lastTransitionTime":"2026-02-27T18:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.814617 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.814699 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.814718 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.814747 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.814765 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:47Z","lastTransitionTime":"2026-02-27T18:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.918144 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.918240 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.918259 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.918282 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:47 crc kubenswrapper[4981]: I0227 18:46:47.918300 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:47Z","lastTransitionTime":"2026-02-27T18:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.021654 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.021701 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.021718 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.021739 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.021757 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:48Z","lastTransitionTime":"2026-02-27T18:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.124863 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.124923 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.124939 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.124963 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.124979 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:48Z","lastTransitionTime":"2026-02-27T18:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.228325 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.228427 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.228445 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.228473 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.228490 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:48Z","lastTransitionTime":"2026-02-27T18:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.331463 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.331528 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.331549 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.331574 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.331622 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:48Z","lastTransitionTime":"2026-02-27T18:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.435002 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.435099 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.435120 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.435148 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.435166 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:48Z","lastTransitionTime":"2026-02-27T18:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.538127 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.538197 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.538217 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.538243 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.538260 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:48Z","lastTransitionTime":"2026-02-27T18:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.628562 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.628607 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.628625 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:48 crc kubenswrapper[4981]: E0227 18:46:48.628735 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:46:48 crc kubenswrapper[4981]: E0227 18:46:48.628928 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:46:48 crc kubenswrapper[4981]: E0227 18:46:48.629108 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.641238 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.641296 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.641313 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.641336 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.641356 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:48Z","lastTransitionTime":"2026-02-27T18:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.745247 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.745350 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.745401 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.745428 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.745446 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:48Z","lastTransitionTime":"2026-02-27T18:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.848098 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.848156 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.848177 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.848205 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.848228 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:48Z","lastTransitionTime":"2026-02-27T18:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.951368 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.951428 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.951445 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.951473 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:48 crc kubenswrapper[4981]: I0227 18:46:48.951491 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:48Z","lastTransitionTime":"2026-02-27T18:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.054228 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.054284 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.054296 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.054316 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.054329 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:49Z","lastTransitionTime":"2026-02-27T18:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.166709 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.166749 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.166758 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.166770 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.166779 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:49Z","lastTransitionTime":"2026-02-27T18:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.269470 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.269551 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.269570 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.269596 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.269616 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:49Z","lastTransitionTime":"2026-02-27T18:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.372147 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.372210 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.372227 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.372254 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.372272 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:49Z","lastTransitionTime":"2026-02-27T18:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.475305 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.475360 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.475379 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.475404 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.475422 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:49Z","lastTransitionTime":"2026-02-27T18:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.518485 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-fxkmm"] Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.518932 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-fxkmm" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.522659 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.522704 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.523647 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.542996 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:49Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.561353 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:46:49Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.577979 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.578027 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.578044 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.578093 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.578112 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:49Z","lastTransitionTime":"2026-02-27T18:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.580921 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:49Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.593825 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcqj6\" (UniqueName: \"kubernetes.io/projected/64a9ab98-e01f-4125-8d91-49fb385b1e6b-kube-api-access-vcqj6\") pod \"node-resolver-fxkmm\" (UID: \"64a9ab98-e01f-4125-8d91-49fb385b1e6b\") " pod="openshift-dns/node-resolver-fxkmm" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.593888 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/64a9ab98-e01f-4125-8d91-49fb385b1e6b-hosts-file\") pod \"node-resolver-fxkmm\" (UID: \"64a9ab98-e01f-4125-8d91-49fb385b1e6b\") " pod="openshift-dns/node-resolver-fxkmm" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.596158 4981 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:49Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.615300 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:49Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.635836 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:49Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.669253 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:49Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.681813 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.681866 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.681884 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.681908 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.681925 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:49Z","lastTransitionTime":"2026-02-27T18:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.692299 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:49Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.694641 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/64a9ab98-e01f-4125-8d91-49fb385b1e6b-hosts-file\") pod \"node-resolver-fxkmm\" (UID: \"64a9ab98-e01f-4125-8d91-49fb385b1e6b\") " pod="openshift-dns/node-resolver-fxkmm" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.694792 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcqj6\" (UniqueName: \"kubernetes.io/projected/64a9ab98-e01f-4125-8d91-49fb385b1e6b-kube-api-access-vcqj6\") pod \"node-resolver-fxkmm\" (UID: \"64a9ab98-e01f-4125-8d91-49fb385b1e6b\") " pod="openshift-dns/node-resolver-fxkmm" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.694850 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/64a9ab98-e01f-4125-8d91-49fb385b1e6b-hosts-file\") pod \"node-resolver-fxkmm\" (UID: \"64a9ab98-e01f-4125-8d91-49fb385b1e6b\") " pod="openshift-dns/node-resolver-fxkmm" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.711410 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:49Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.730186 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcqj6\" (UniqueName: \"kubernetes.io/projected/64a9ab98-e01f-4125-8d91-49fb385b1e6b-kube-api-access-vcqj6\") pod \"node-resolver-fxkmm\" (UID: \"64a9ab98-e01f-4125-8d91-49fb385b1e6b\") " pod="openshift-dns/node-resolver-fxkmm" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.739796 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:49Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.785189 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.785247 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.785264 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.785289 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.785308 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:49Z","lastTransitionTime":"2026-02-27T18:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.843374 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-fxkmm" Feb 27 18:46:49 crc kubenswrapper[4981]: W0227 18:46:49.863355 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64a9ab98_e01f_4125_8d91_49fb385b1e6b.slice/crio-894b74b7749b11414b9d8fe692fad87efe4a6624ac415b42d7ebee748973b32c WatchSource:0}: Error finding container 894b74b7749b11414b9d8fe692fad87efe4a6624ac415b42d7ebee748973b32c: Status 404 returned error can't find the container with id 894b74b7749b11414b9d8fe692fad87efe4a6624ac415b42d7ebee748973b32c Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.888571 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.888621 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.888638 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.888659 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.888676 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:49Z","lastTransitionTime":"2026-02-27T18:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.918086 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-5pm8g"] Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.918607 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.925972 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-ktw87"] Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.927009 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.927080 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.927279 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-992xv"] Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.927501 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.927555 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ktw87" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.927771 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-992xv" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.929887 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.930194 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.930884 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.931125 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.931343 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.932361 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.932642 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.932709 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.932793 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.946909 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:49Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.960966 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:49Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.987136 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:49Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.992789 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.992856 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.992874 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.992903 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.992921 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:49Z","lastTransitionTime":"2026-02-27T18:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.997592 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtcfx\" (UniqueName: \"kubernetes.io/projected/1fefdc04-8285-4630-83d3-494dcc0216f6-kube-api-access-rtcfx\") pod \"machine-config-daemon-5pm8g\" (UID: \"1fefdc04-8285-4630-83d3-494dcc0216f6\") " pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.997664 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-multus-conf-dir\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.997701 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-host-var-lib-cni-multus\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.997735 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-host-run-multus-certs\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.997783 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-etc-kubernetes\") pod \"multus-992xv\" (UID: 
\"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.997818 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-host-var-lib-kubelet\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.997849 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1fefdc04-8285-4630-83d3-494dcc0216f6-rootfs\") pod \"machine-config-daemon-5pm8g\" (UID: \"1fefdc04-8285-4630-83d3-494dcc0216f6\") " pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.997880 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ktw87\" (UID: \"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\") " pod="openshift-multus/multus-additional-cni-plugins-ktw87" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.997914 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-os-release\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.997946 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2f03f89e-d428-4246-a710-23c47810b60e-cni-binary-copy\") pod 
\"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.997978 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-host-run-k8s-cni-cncf-io\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.998013 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhf52\" (UniqueName: \"kubernetes.io/projected/2f03f89e-d428-4246-a710-23c47810b60e-kube-api-access-hhf52\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.998049 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d-os-release\") pod \"multus-additional-cni-plugins-ktw87\" (UID: \"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\") " pod="openshift-multus/multus-additional-cni-plugins-ktw87" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.998114 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ktw87\" (UID: \"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\") " pod="openshift-multus/multus-additional-cni-plugins-ktw87" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.998169 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-multus-cni-dir\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.998198 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-hostroot\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.998231 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1fefdc04-8285-4630-83d3-494dcc0216f6-mcd-auth-proxy-config\") pod \"machine-config-daemon-5pm8g\" (UID: \"1fefdc04-8285-4630-83d3-494dcc0216f6\") " pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.998264 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d-cnibin\") pod \"multus-additional-cni-plugins-ktw87\" (UID: \"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\") " pod="openshift-multus/multus-additional-cni-plugins-ktw87" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.998302 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5nnd\" (UniqueName: \"kubernetes.io/projected/ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d-kube-api-access-g5nnd\") pod \"multus-additional-cni-plugins-ktw87\" (UID: \"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\") " pod="openshift-multus/multus-additional-cni-plugins-ktw87" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.998338 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-host-var-lib-cni-bin\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.998368 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-system-cni-dir\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.998398 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-cnibin\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.998453 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2f03f89e-d428-4246-a710-23c47810b60e-multus-daemon-config\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.998487 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-multus-socket-dir-parent\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.998551 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d-cni-binary-copy\") pod \"multus-additional-cni-plugins-ktw87\" (UID: \"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\") " pod="openshift-multus/multus-additional-cni-plugins-ktw87" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.998581 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-host-run-netns\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.998616 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1fefdc04-8285-4630-83d3-494dcc0216f6-proxy-tls\") pod \"machine-config-daemon-5pm8g\" (UID: \"1fefdc04-8285-4630-83d3-494dcc0216f6\") " pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 18:46:49 crc kubenswrapper[4981]: I0227 18:46:49.998647 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d-system-cni-dir\") pod \"multus-additional-cni-plugins-ktw87\" (UID: \"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\") " pod="openshift-multus/multus-additional-cni-plugins-ktw87" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.006362 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.025896 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.044670 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.061249 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.080964 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.096037 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.096127 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.096147 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.096175 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.096193 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:50Z","lastTransitionTime":"2026-02-27T18:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.099234 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-multus-cni-dir\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.099288 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-hostroot\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.099328 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1fefdc04-8285-4630-83d3-494dcc0216f6-mcd-auth-proxy-config\") pod \"machine-config-daemon-5pm8g\" (UID: \"1fefdc04-8285-4630-83d3-494dcc0216f6\") " pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.099367 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d-cnibin\") pod \"multus-additional-cni-plugins-ktw87\" (UID: \"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\") " pod="openshift-multus/multus-additional-cni-plugins-ktw87" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.099403 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5nnd\" (UniqueName: \"kubernetes.io/projected/ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d-kube-api-access-g5nnd\") pod \"multus-additional-cni-plugins-ktw87\" (UID: 
\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\") " pod="openshift-multus/multus-additional-cni-plugins-ktw87" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.099439 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-host-var-lib-cni-bin\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.099473 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-cnibin\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.099506 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-system-cni-dir\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.099538 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2f03f89e-d428-4246-a710-23c47810b60e-multus-daemon-config\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.099573 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-multus-socket-dir-parent\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 
18:46:50.099628 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d-cni-binary-copy\") pod \"multus-additional-cni-plugins-ktw87\" (UID: \"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\") " pod="openshift-multus/multus-additional-cni-plugins-ktw87" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.099661 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-host-run-netns\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.099621 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f
2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.099712 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d-system-cni-dir\") pod \"multus-additional-cni-plugins-ktw87\" (UID: \"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\") " pod="openshift-multus/multus-additional-cni-plugins-ktw87" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.099748 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1fefdc04-8285-4630-83d3-494dcc0216f6-proxy-tls\") pod \"machine-config-daemon-5pm8g\" (UID: \"1fefdc04-8285-4630-83d3-494dcc0216f6\") " pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.099783 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" 
(UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-multus-conf-dir\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.099831 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtcfx\" (UniqueName: \"kubernetes.io/projected/1fefdc04-8285-4630-83d3-494dcc0216f6-kube-api-access-rtcfx\") pod \"machine-config-daemon-5pm8g\" (UID: \"1fefdc04-8285-4630-83d3-494dcc0216f6\") " pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.099868 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-host-var-lib-cni-multus\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.099900 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-host-run-multus-certs\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.099931 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-etc-kubernetes\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.099976 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1fefdc04-8285-4630-83d3-494dcc0216f6-rootfs\") pod 
\"machine-config-daemon-5pm8g\" (UID: \"1fefdc04-8285-4630-83d3-494dcc0216f6\") " pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.100010 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ktw87\" (UID: \"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\") " pod="openshift-multus/multus-additional-cni-plugins-ktw87" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.100041 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-os-release\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.100108 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2f03f89e-d428-4246-a710-23c47810b60e-cni-binary-copy\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.100141 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-host-run-k8s-cni-cncf-io\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.100149 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-multus-cni-dir\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " 
pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.100172 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-host-var-lib-kubelet\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.100207 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhf52\" (UniqueName: \"kubernetes.io/projected/2f03f89e-d428-4246-a710-23c47810b60e-kube-api-access-hhf52\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.100217 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-hostroot\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.100243 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d-os-release\") pod \"multus-additional-cni-plugins-ktw87\" (UID: \"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\") " pod="openshift-multus/multus-additional-cni-plugins-ktw87" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.100281 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ktw87\" (UID: \"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\") " pod="openshift-multus/multus-additional-cni-plugins-ktw87" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 
18:46:50.100472 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-host-run-multus-certs\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.100642 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-etc-kubernetes\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.100753 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1fefdc04-8285-4630-83d3-494dcc0216f6-rootfs\") pod \"machine-config-daemon-5pm8g\" (UID: \"1fefdc04-8285-4630-83d3-494dcc0216f6\") " pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.101109 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-multus-socket-dir-parent\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.101229 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-host-var-lib-kubelet\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.101251 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ktw87\" (UID: \"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\") " pod="openshift-multus/multus-additional-cni-plugins-ktw87" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.100042 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-system-cni-dir\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.101406 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d-os-release\") pod \"multus-additional-cni-plugins-ktw87\" (UID: \"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\") " pod="openshift-multus/multus-additional-cni-plugins-ktw87" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.101410 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-host-var-lib-cni-bin\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.101406 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d-cnibin\") pod \"multus-additional-cni-plugins-ktw87\" (UID: \"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\") " pod="openshift-multus/multus-additional-cni-plugins-ktw87" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.101444 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-host-run-k8s-cni-cncf-io\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.101492 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-cnibin\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.101597 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-host-var-lib-cni-multus\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.101619 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-multus-conf-dir\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.101633 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-host-run-netns\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.101634 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2f03f89e-d428-4246-a710-23c47810b60e-os-release\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc 
kubenswrapper[4981]: I0227 18:46:50.101651 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d-system-cni-dir\") pod \"multus-additional-cni-plugins-ktw87\" (UID: \"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\") " pod="openshift-multus/multus-additional-cni-plugins-ktw87" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.102192 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1fefdc04-8285-4630-83d3-494dcc0216f6-mcd-auth-proxy-config\") pod \"machine-config-daemon-5pm8g\" (UID: \"1fefdc04-8285-4630-83d3-494dcc0216f6\") " pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.102724 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2f03f89e-d428-4246-a710-23c47810b60e-cni-binary-copy\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.102798 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2f03f89e-d428-4246-a710-23c47810b60e-multus-daemon-config\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.102857 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ktw87\" (UID: \"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\") " pod="openshift-multus/multus-additional-cni-plugins-ktw87" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 
18:46:50.104137 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d-cni-binary-copy\") pod \"multus-additional-cni-plugins-ktw87\" (UID: \"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\") " pod="openshift-multus/multus-additional-cni-plugins-ktw87" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.110444 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1fefdc04-8285-4630-83d3-494dcc0216f6-proxy-tls\") pod \"machine-config-daemon-5pm8g\" (UID: \"1fefdc04-8285-4630-83d3-494dcc0216f6\") " pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.120890 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.129400 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtcfx\" (UniqueName: \"kubernetes.io/projected/1fefdc04-8285-4630-83d3-494dcc0216f6-kube-api-access-rtcfx\") pod \"machine-config-daemon-5pm8g\" (UID: \"1fefdc04-8285-4630-83d3-494dcc0216f6\") " pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 
18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.132572 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhf52\" (UniqueName: \"kubernetes.io/projected/2f03f89e-d428-4246-a710-23c47810b60e-kube-api-access-hhf52\") pod \"multus-992xv\" (UID: \"2f03f89e-d428-4246-a710-23c47810b60e\") " pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.133102 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5nnd\" (UniqueName: \"kubernetes.io/projected/ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d-kube-api-access-g5nnd\") pod \"multus-additional-cni-plugins-ktw87\" (UID: \"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\") " pod="openshift-multus/multus-additional-cni-plugins-ktw87" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.139322 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with 
unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.158687 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.171984 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fxkmm" event={"ID":"64a9ab98-e01f-4125-8d91-49fb385b1e6b","Type":"ContainerStarted","Data":"894b74b7749b11414b9d8fe692fad87efe4a6624ac415b42d7ebee748973b32c"} Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.175733 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.200943 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.201007 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.201028 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.201079 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.201100 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:50Z","lastTransitionTime":"2026-02-27T18:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.206464 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.229882 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.247891 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.249616 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.262107 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ktw87" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.271488 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-992xv" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.273381 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" 
for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.296166 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.304856 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.304992 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.305113 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.305218 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.305306 4981 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:50Z","lastTransitionTime":"2026-02-27T18:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.313982 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.319649 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6rlwn"] Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.320808 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.324539 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.324705 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.324789 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.325162 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.325646 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.325872 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.326304 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.337357 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.361236 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.378411 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.398219 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.402789 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0918866b-8c49-4332-bb4d-bea02b35f047-ovnkube-config\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.402823 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-kubelet\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.402847 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-run-openvswitch\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.402868 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-node-log\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.402950 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0918866b-8c49-4332-bb4d-bea02b35f047-ovnkube-script-lib\") pod 
\"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.403117 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-log-socket\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.403175 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-systemd-units\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.403228 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-cni-netd\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.403279 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-run-ovn\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.403326 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0918866b-8c49-4332-bb4d-bea02b35f047-env-overrides\") pod 
\"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.403370 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0918866b-8c49-4332-bb4d-bea02b35f047-ovn-node-metrics-cert\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.403450 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-run-netns\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.403558 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-run-systemd\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.403611 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-slash\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.403643 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-run-ovn-kubernetes\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.403675 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.403712 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-etc-openvswitch\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.403765 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm8jq\" (UniqueName: \"kubernetes.io/projected/0918866b-8c49-4332-bb4d-bea02b35f047-kube-api-access-lm8jq\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.403799 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-cni-bin\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.403861 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-var-lib-openvswitch\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.409459 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.409507 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.409529 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.409558 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.409580 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:50Z","lastTransitionTime":"2026-02-27T18:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.414979 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.427759 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.439619 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.455714 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.481654 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f584
08f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda
9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.499790 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.507585 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-cni-bin\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.507639 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-var-lib-openvswitch\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.507661 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-run-openvswitch\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.507717 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-node-log\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.507740 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0918866b-8c49-4332-bb4d-bea02b35f047-ovnkube-config\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.507761 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-kubelet\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.507781 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-log-socket\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 
18:46:50.507800 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0918866b-8c49-4332-bb4d-bea02b35f047-ovnkube-script-lib\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.507829 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-cni-netd\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.507847 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-systemd-units\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.507870 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-run-netns\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.507887 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-run-ovn\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.507907 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0918866b-8c49-4332-bb4d-bea02b35f047-env-overrides\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.507925 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0918866b-8c49-4332-bb4d-bea02b35f047-ovn-node-metrics-cert\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.507959 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-run-systemd\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.507981 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-slash\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.508002 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-run-ovn-kubernetes\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.508022 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.508044 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-etc-openvswitch\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.508081 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm8jq\" (UniqueName: \"kubernetes.io/projected/0918866b-8c49-4332-bb4d-bea02b35f047-kube-api-access-lm8jq\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.508123 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-cni-netd\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.508184 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-cni-bin\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.508219 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-var-lib-openvswitch\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.508252 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-run-openvswitch\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.508286 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-node-log\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.508400 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-systemd-units\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.508441 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-run-netns\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.508471 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-run-ovn\") pod \"ovnkube-node-6rlwn\" (UID: 
\"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.508959 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-log-socket\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.508991 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-kubelet\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.509068 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.509078 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-slash\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.509134 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-run-ovn-kubernetes\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.509158 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-etc-openvswitch\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.509213 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-run-systemd\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.509341 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0918866b-8c49-4332-bb4d-bea02b35f047-ovnkube-config\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.509496 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0918866b-8c49-4332-bb4d-bea02b35f047-env-overrides\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.509982 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0918866b-8c49-4332-bb4d-bea02b35f047-ovnkube-script-lib\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.513130 4981 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.513164 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.513178 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.513162 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0918866b-8c49-4332-bb4d-bea02b35f047-ovn-node-metrics-cert\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.513195 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.513262 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:50Z","lastTransitionTime":"2026-02-27T18:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.516598 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.531839 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.538904 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm8jq\" (UniqueName: \"kubernetes.io/projected/0918866b-8c49-4332-bb4d-bea02b35f047-kube-api-access-lm8jq\") pod \"ovnkube-node-6rlwn\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.547249 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.559406 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.576468 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.590489 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.603698 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.616300 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.616342 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.616355 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.616373 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.616386 4981 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:50Z","lastTransitionTime":"2026-02-27T18:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.617697 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.617788 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.617815 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.617847 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.617870 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:50Z","lastTransitionTime":"2026-02-27T18:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.621099 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.627461 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.627519 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.627598 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:46:50 crc kubenswrapper[4981]: E0227 18:46:50.627723 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:46:50 crc kubenswrapper[4981]: E0227 18:46:50.627850 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:46:50 crc kubenswrapper[4981]: E0227 18:46:50.628065 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:46:50 crc kubenswrapper[4981]: E0227 18:46:50.636082 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.640315 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.640362 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.640374 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.640394 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.640408 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:50Z","lastTransitionTime":"2026-02-27T18:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.642712 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: E0227 18:46:50.655695 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\
\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"
sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":48599861
6},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\
\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.658995 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.659453 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.659532 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.659545 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.659564 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.659576 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:50Z","lastTransitionTime":"2026-02-27T18:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:50 crc kubenswrapper[4981]: W0227 18:46:50.676372 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0918866b_8c49_4332_bb4d_bea02b35f047.slice/crio-f138b4f7e022848b500a07d4646746d1cb35f8efe4f7204646c2aeb809d39a00 WatchSource:0}: Error finding container f138b4f7e022848b500a07d4646746d1cb35f8efe4f7204646c2aeb809d39a00: Status 404 returned error can't find the container with id f138b4f7e022848b500a07d4646746d1cb35f8efe4f7204646c2aeb809d39a00 Feb 27 18:46:50 crc kubenswrapper[4981]: E0227 18:46:50.680788 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.684394 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.684439 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.684456 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.684480 4981 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.684499 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:50Z","lastTransitionTime":"2026-02-27T18:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:50 crc kubenswrapper[4981]: E0227 18:46:50.701093 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.705146 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.705212 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.705234 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.705258 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.705277 4981 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:50Z","lastTransitionTime":"2026-02-27T18:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:50 crc kubenswrapper[4981]: E0227 18:46:50.725162 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:50Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:50 crc kubenswrapper[4981]: E0227 18:46:50.725765 4981 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.727987 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.728038 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.728084 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.728112 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:50 crc 
kubenswrapper[4981]: I0227 18:46:50.728130 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:50Z","lastTransitionTime":"2026-02-27T18:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.831146 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.831202 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.831216 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.831237 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.831249 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:50Z","lastTransitionTime":"2026-02-27T18:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.935004 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.935095 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.935115 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.935141 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:50 crc kubenswrapper[4981]: I0227 18:46:50.935159 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:50Z","lastTransitionTime":"2026-02-27T18:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.037896 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.037935 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.037947 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.037964 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.037976 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:51Z","lastTransitionTime":"2026-02-27T18:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.140663 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.140745 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.140764 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.140800 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.140822 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:51Z","lastTransitionTime":"2026-02-27T18:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.179342 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-992xv" event={"ID":"2f03f89e-d428-4246-a710-23c47810b60e","Type":"ContainerStarted","Data":"624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c"} Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.179607 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-992xv" event={"ID":"2f03f89e-d428-4246-a710-23c47810b60e","Type":"ContainerStarted","Data":"2adfb2c9d3746229f3b9e8e50089844d0f72a0777e835c213e75d1494f3f1df9"} Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.181923 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerStarted","Data":"e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be"} Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.182037 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerStarted","Data":"cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cda8d27799da2d9733389fe569"} Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.182103 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerStarted","Data":"39f29aa00ace84f03865933587191e46a9ef0c825f0493a3e6407cfad83647d9"} Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.184111 4981 generic.go:334] "Generic (PLEG): container finished" podID="0918866b-8c49-4332-bb4d-bea02b35f047" containerID="17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74" exitCode=0 Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.184196 4981 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" event={"ID":"0918866b-8c49-4332-bb4d-bea02b35f047","Type":"ContainerDied","Data":"17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74"} Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.184250 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" event={"ID":"0918866b-8c49-4332-bb4d-bea02b35f047","Type":"ContainerStarted","Data":"f138b4f7e022848b500a07d4646746d1cb35f8efe4f7204646c2aeb809d39a00"} Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.186174 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fxkmm" event={"ID":"64a9ab98-e01f-4125-8d91-49fb385b1e6b","Type":"ContainerStarted","Data":"4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b"} Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.191457 4981 generic.go:334] "Generic (PLEG): container finished" podID="ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d" containerID="7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349" exitCode=0 Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.191564 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" event={"ID":"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d","Type":"ContainerDied","Data":"7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349"} Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.191654 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" event={"ID":"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d","Type":"ContainerStarted","Data":"f555c92b32a6b79696876a8e6e3617355453b6626a5eee5e901a0624f819bce4"} Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.197322 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.215238 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.240184 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.245269 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.245334 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.245355 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.245385 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.245405 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:51Z","lastTransitionTime":"2026-02-27T18:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.278447 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.300336 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.315664 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.335741 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.349450 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.349491 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.349511 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.349538 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.349560 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:51Z","lastTransitionTime":"2026-02-27T18:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.356763 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.382937 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.398774 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.414032 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.427955 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.445077 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.452278 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.452323 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.452336 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.452355 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.452368 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:51Z","lastTransitionTime":"2026-02-27T18:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.461962 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.482630 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.495452 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bde
af3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.515627 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.534374 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.549652 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.554754 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.554795 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.554805 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.554819 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.554828 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:51Z","lastTransitionTime":"2026-02-27T18:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.565897 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z 
is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.578622 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.591961 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.606068 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.623007 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cd
a8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.642514 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.656252 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.657982 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.658026 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.658038 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.658085 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.658101 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:51Z","lastTransitionTime":"2026-02-27T18:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.670721 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.679394 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.689813 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.699729 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.716223 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.730645 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1953c5004b66db2
ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.743726 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cd
a8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.760825 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.760887 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.760906 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:51 crc 
kubenswrapper[4981]: I0227 18:46:51.760932 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.760954 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:51Z","lastTransitionTime":"2026-02-27T18:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.767172 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.784948 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.799279 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.813607 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.827347 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.845504 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.862923 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac
-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.864881 4981 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.865137 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.865269 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.865391 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.865529 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:51Z","lastTransitionTime":"2026-02-27T18:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.879520 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.900395 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:51Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.968482 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.968534 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.968548 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.968568 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:51 crc kubenswrapper[4981]: I0227 18:46:51.968581 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:51Z","lastTransitionTime":"2026-02-27T18:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.071662 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.071726 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.071746 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.071774 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.071792 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:52Z","lastTransitionTime":"2026-02-27T18:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.174835 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.175514 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.175533 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.175563 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.175583 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:52Z","lastTransitionTime":"2026-02-27T18:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.199681 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" event={"ID":"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d","Type":"ContainerStarted","Data":"0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9"} Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.208090 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" event={"ID":"0918866b-8c49-4332-bb4d-bea02b35f047","Type":"ContainerStarted","Data":"02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68"} Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.208175 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" event={"ID":"0918866b-8c49-4332-bb4d-bea02b35f047","Type":"ContainerStarted","Data":"0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf"} Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.208195 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" event={"ID":"0918866b-8c49-4332-bb4d-bea02b35f047","Type":"ContainerStarted","Data":"5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61"} Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.224876 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:52Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.240243 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:52Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.254317 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:52Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.274572 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:52Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.279778 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.279841 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.279861 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.279889 4981 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.279907 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:52Z","lastTransitionTime":"2026-02-27T18:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.290848 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cda8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:52Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.327129 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416
d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:52Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.346101 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:52Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.362399 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:52Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.385016 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.385094 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.385115 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.385141 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.385157 4981 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:52Z","lastTransitionTime":"2026-02-27T18:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.385418 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:52Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.417579 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:52Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.431163 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bde
af3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:52Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.446222 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:52Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.465029 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:52Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.482799 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:52Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.486884 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.486913 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.486922 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.486936 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.486945 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:52Z","lastTransitionTime":"2026-02-27T18:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.589300 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.589349 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.589361 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.589385 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.589398 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:52Z","lastTransitionTime":"2026-02-27T18:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.628433 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.628464 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.628488 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:52 crc kubenswrapper[4981]: E0227 18:46:52.628554 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:46:52 crc kubenswrapper[4981]: E0227 18:46:52.628720 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:46:52 crc kubenswrapper[4981]: E0227 18:46:52.628824 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.698729 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.698795 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.698813 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.698837 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.698855 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:52Z","lastTransitionTime":"2026-02-27T18:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.801681 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.801735 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.801751 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.801771 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.801787 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:52Z","lastTransitionTime":"2026-02-27T18:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.904373 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.904433 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.904451 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.904476 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:52 crc kubenswrapper[4981]: I0227 18:46:52.904493 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:52Z","lastTransitionTime":"2026-02-27T18:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.006697 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.006745 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.006763 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.006783 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.006798 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:53Z","lastTransitionTime":"2026-02-27T18:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.109750 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.109801 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.109814 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.109832 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.109845 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:53Z","lastTransitionTime":"2026-02-27T18:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.212634 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.212669 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.212681 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.212695 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.212705 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:53Z","lastTransitionTime":"2026-02-27T18:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.217037 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" event={"ID":"0918866b-8c49-4332-bb4d-bea02b35f047","Type":"ContainerStarted","Data":"bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39"} Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.217138 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" event={"ID":"0918866b-8c49-4332-bb4d-bea02b35f047","Type":"ContainerStarted","Data":"082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501"} Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.217166 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" event={"ID":"0918866b-8c49-4332-bb4d-bea02b35f047","Type":"ContainerStarted","Data":"e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444"} Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.219112 4981 generic.go:334] "Generic (PLEG): container finished" podID="ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d" containerID="0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9" exitCode=0 Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.219151 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" event={"ID":"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d","Type":"ContainerDied","Data":"0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9"} Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.241318 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:53Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.265576 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:53Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.280475 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cd
a8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:53Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.316428 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.316481 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.316763 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:53 crc 
kubenswrapper[4981]: I0227 18:46:53.316805 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.316824 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:53Z","lastTransitionTime":"2026-02-27T18:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.317981 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:53Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.340562 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:53Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.360516 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:53Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.378500 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:53Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.391973 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:53Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.409242 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac
-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:53Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.423707 4981 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.423777 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.423795 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.423825 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.423843 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:53Z","lastTransitionTime":"2026-02-27T18:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.433391 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:53Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.471516 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:53Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.497939 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:53Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.517573 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:46:53Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.527987 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.528030 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.528049 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.528107 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.528124 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:53Z","lastTransitionTime":"2026-02-27T18:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.537691 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:53Z 
is after 2025-08-24T17:21:41Z" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.633405 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.633459 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.633478 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.633509 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.633527 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:53Z","lastTransitionTime":"2026-02-27T18:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.736864 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.736927 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.736945 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.736969 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.736987 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:53Z","lastTransitionTime":"2026-02-27T18:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.840642 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.840710 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.840730 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.840756 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.840776 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:53Z","lastTransitionTime":"2026-02-27T18:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.945200 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.945272 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.945292 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.945319 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:53 crc kubenswrapper[4981]: I0227 18:46:53.945341 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:53Z","lastTransitionTime":"2026-02-27T18:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.048039 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.048133 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.048151 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.048177 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.048195 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:54Z","lastTransitionTime":"2026-02-27T18:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.151203 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.151266 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.151283 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.151309 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.151327 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:54Z","lastTransitionTime":"2026-02-27T18:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.226170 4981 generic.go:334] "Generic (PLEG): container finished" podID="ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d" containerID="a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076" exitCode=0 Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.226227 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" event={"ID":"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d","Type":"ContainerDied","Data":"a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076"} Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.245142 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:54Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.258528 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.259170 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.259203 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:54 crc 
kubenswrapper[4981]: I0227 18:46:54.259230 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.259249 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:54Z","lastTransitionTime":"2026-02-27T18:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.269094 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:54Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.288244 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:54Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.319161 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:54Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.339827 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:54Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.359707 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:54Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.361676 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.361719 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.361737 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.361764 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.361782 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:54Z","lastTransitionTime":"2026-02-27T18:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.378481 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:54Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.398501 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:54Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.414791 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec
304a7cda8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:54Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.433717 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:54Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.449459 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:54Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.464767 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.464821 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.464838 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.464862 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.464880 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:54Z","lastTransitionTime":"2026-02-27T18:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.467378 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:54Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.491842 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:54Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 
18:46:54.521209 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:54Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.569793 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.569852 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.569870 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.569897 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.569914 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:54Z","lastTransitionTime":"2026-02-27T18:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.628572 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.628619 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.628650 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:46:54 crc kubenswrapper[4981]: E0227 18:46:54.628768 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:46:54 crc kubenswrapper[4981]: E0227 18:46:54.628882 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:46:54 crc kubenswrapper[4981]: E0227 18:46:54.629092 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.682437 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.682518 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.682545 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.682576 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.682602 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:54Z","lastTransitionTime":"2026-02-27T18:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.785442 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.785517 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.785537 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.785912 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.786024 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:54Z","lastTransitionTime":"2026-02-27T18:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.888241 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.888280 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.888293 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.888313 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.888325 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:54Z","lastTransitionTime":"2026-02-27T18:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.990919 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.990959 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.990971 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.990989 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:54 crc kubenswrapper[4981]: I0227 18:46:54.991001 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:54Z","lastTransitionTime":"2026-02-27T18:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.094370 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.094417 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.094434 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.094477 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.094495 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:55Z","lastTransitionTime":"2026-02-27T18:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.197355 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.197406 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.197423 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.197447 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.197464 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:55Z","lastTransitionTime":"2026-02-27T18:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.237353 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" event={"ID":"0918866b-8c49-4332-bb4d-bea02b35f047","Type":"ContainerStarted","Data":"bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e"} Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.241497 4981 generic.go:334] "Generic (PLEG): container finished" podID="ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d" containerID="edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805" exitCode=0 Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.241544 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" event={"ID":"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d","Type":"ContainerDied","Data":"edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805"} Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.261671 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cd
a8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:55Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.300838 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.300899 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.300917 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:55 crc 
kubenswrapper[4981]: I0227 18:46:55.300946 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.300963 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:55Z","lastTransitionTime":"2026-02-27T18:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.308389 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:55Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.330855 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:55Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.349822 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:55Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.370731 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:55Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.389954 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:55Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.404915 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.404967 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.404990 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.405020 4981 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.405042 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:55Z","lastTransitionTime":"2026-02-27T18:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.407429 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:55Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.423008 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:55Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.438126 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac
-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:55Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.462514 4981 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:55Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.497836 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:55Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.508104 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.508185 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.508206 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.508239 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.508259 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:55Z","lastTransitionTime":"2026-02-27T18:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.516702 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:55Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.532958 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:46:55Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.553479 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:46:55Z is after 2025-08-24T17:21:41Z"
Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.611855 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.611906 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.611918 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.611943 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.611955 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:55Z","lastTransitionTime":"2026-02-27T18:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.715547 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.715594 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.715605 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.715625 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.715637 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:55Z","lastTransitionTime":"2026-02-27T18:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.818642 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.818706 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.818732 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.818760 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.818779 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:55Z","lastTransitionTime":"2026-02-27T18:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.922450 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.922507 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.922523 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.922545 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 18:46:55 crc kubenswrapper[4981]: I0227 18:46:55.922563 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:55Z","lastTransitionTime":"2026-02-27T18:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.026146 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.026225 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.026243 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.026265 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.026285 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:56Z","lastTransitionTime":"2026-02-27T18:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.130116 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.130177 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.130195 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.130220 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.130237 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:56Z","lastTransitionTime":"2026-02-27T18:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.233912 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.233961 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.233978 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.234003 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.234020 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:56Z","lastTransitionTime":"2026-02-27T18:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.249811 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" event={"ID":"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d","Type":"ContainerStarted","Data":"445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11"} Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.270231 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.287001 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.307174 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.326355 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.336887 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.336932 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.336950 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.336977 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.336993 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:56Z","lastTransitionTime":"2026-02-27T18:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.344115 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.360568 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cd
a8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.391683 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.410172 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.431593 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.439903 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.439972 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.439993 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.440021 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.440041 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:56Z","lastTransitionTime":"2026-02-27T18:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.444914 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-wcwdj"] Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.445508 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-wcwdj" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.447759 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.447817 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.447875 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.447939 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.454225 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.469895 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0-serviceca\") pod \"node-ca-wcwdj\" (UID: \"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\") " pod="openshift-image-registry/node-ca-wcwdj" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.469977 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0-host\") pod \"node-ca-wcwdj\" (UID: \"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\") " pod="openshift-image-registry/node-ca-wcwdj" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.470017 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twhd9\" (UniqueName: \"kubernetes.io/projected/0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0-kube-api-access-twhd9\") pod \"node-ca-wcwdj\" (UID: \"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\") " pod="openshift-image-registry/node-ca-wcwdj" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.471729 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.487552 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac
-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.510583 4981 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5n
nd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.541039 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.543631 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.543675 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.543696 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.543725 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.543746 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:56Z","lastTransitionTime":"2026-02-27T18:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.560012 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.571337 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0-serviceca\") pod \"node-ca-wcwdj\" (UID: \"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\") " pod="openshift-image-registry/node-ca-wcwdj" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.571435 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0-host\") pod \"node-ca-wcwdj\" (UID: \"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\") " pod="openshift-image-registry/node-ca-wcwdj" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.571483 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twhd9\" (UniqueName: 
\"kubernetes.io/projected/0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0-kube-api-access-twhd9\") pod \"node-ca-wcwdj\" (UID: \"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\") " pod="openshift-image-registry/node-ca-wcwdj" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.571647 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0-host\") pod \"node-ca-wcwdj\" (UID: \"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\") " pod="openshift-image-registry/node-ca-wcwdj" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.573973 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0-serviceca\") pod \"node-ca-wcwdj\" (UID: \"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\") " pod="openshift-image-registry/node-ca-wcwdj" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.575395 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.591494 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac
-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.596599 4981 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twhd9\" (UniqueName: \"kubernetes.io/projected/0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0-kube-api-access-twhd9\") pod \"node-ca-wcwdj\" (UID: \"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\") " pod="openshift-image-registry/node-ca-wcwdj" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.614267 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5n
nd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.627973 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.628028 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.627973 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:46:56 crc kubenswrapper[4981]: E0227 18:46:56.628159 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:46:56 crc kubenswrapper[4981]: E0227 18:46:56.628303 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:46:56 crc kubenswrapper[4981]: E0227 18:46:56.628405 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.644121 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.651507 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.651573 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.651592 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.651617 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.651647 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:56Z","lastTransitionTime":"2026-02-27T18:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.667239 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.684036 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.699292 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.715211 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.734719 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\
\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.750209 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cd
a8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.754200 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.754250 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.754267 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:56 crc 
kubenswrapper[4981]: I0227 18:46:56.754292 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.754311 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:56Z","lastTransitionTime":"2026-02-27T18:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.762636 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twhd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.793326 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.812838 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.816039 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-wcwdj" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.834045 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:56Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:56 crc kubenswrapper[4981]: W0227 18:46:56.839676 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f91b4bf_a71e_44b8_95aa_fa8c0439c2e0.slice/crio-c144e36796d6ecebff35d3be2914e3da39245fcb5836cc6d03f48d54ec15e9db WatchSource:0}: Error finding container c144e36796d6ecebff35d3be2914e3da39245fcb5836cc6d03f48d54ec15e9db: Status 404 returned error can't find the container with id c144e36796d6ecebff35d3be2914e3da39245fcb5836cc6d03f48d54ec15e9db Feb 27 18:46:56 crc 
kubenswrapper[4981]: I0227 18:46:56.856744 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.856794 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.856813 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.856838 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.856856 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:56Z","lastTransitionTime":"2026-02-27T18:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.959699 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.960139 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.960169 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.960201 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:56 crc kubenswrapper[4981]: I0227 18:46:56.960224 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:56Z","lastTransitionTime":"2026-02-27T18:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.063401 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.063464 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.063483 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.063510 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.063532 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:57Z","lastTransitionTime":"2026-02-27T18:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.166129 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.166186 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.166206 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.166232 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.166251 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:57Z","lastTransitionTime":"2026-02-27T18:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.258911 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" event={"ID":"0918866b-8c49-4332-bb4d-bea02b35f047","Type":"ContainerStarted","Data":"08c00d12667b939a0a72f9df56f2e8294bdb934b81fe13c07c0d6f83b6045c54"} Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.259239 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.259430 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.259490 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.269352 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.269382 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" event={"ID":"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d","Type":"ContainerDied","Data":"445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11"} Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.269339 4981 generic.go:334] "Generic (PLEG): container finished" podID="ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d" containerID="445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11" exitCode=0 Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.269397 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.269558 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.269586 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.269603 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:57Z","lastTransitionTime":"2026-02-27T18:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.272274 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bd
eaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.272744 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wcwdj" event={"ID":"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0","Type":"ContainerStarted","Data":"b78e1219b4a49e7b612e1b1ab3b6fe58f43c836ad2a06aa11813117ceef60e00"} Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.272842 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wcwdj" event={"ID":"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0","Type":"ContainerStarted","Data":"c144e36796d6ecebff35d3be2914e3da39245fcb5836cc6d03f48d54ec15e9db"} Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.297580 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63
609cc9bcf687140f29b3a85a076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.301832 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.301930 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.327974 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c00d12667b939a0a72f9df56f2e8294bdb934b81fe13c07c0d6f83b6045c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.345473 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.361255 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.372830 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.372895 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.372912 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.372938 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.372956 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:57Z","lastTransitionTime":"2026-02-27T18:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.383795 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z 
is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.405691 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.422087 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.438323 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cd
a8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.448954 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twhd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.475511 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.477699 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.477764 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.477779 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.477799 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.477819 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:57Z","lastTransitionTime":"2026-02-27T18:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.497165 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.524501 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.543215 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.556382 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.572819 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.581833 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.581881 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.581898 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.581923 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.581940 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:57Z","lastTransitionTime":"2026-02-27T18:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.589351 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.607505 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.628906 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.648694 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.664009 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.684162 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cd
a8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.684770 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.684965 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.685139 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:57 crc 
kubenswrapper[4981]: I0227 18:46:57.685271 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.685435 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:57Z","lastTransitionTime":"2026-02-27T18:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.699980 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78e1219b4a49e7b612e1b1ab3b6fe58f43c836ad2a06aa11813117ceef60e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twhd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.729771 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.751276 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.770302 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.788312 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.789519 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.789579 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.789596 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.789623 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.789644 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:57Z","lastTransitionTime":"2026-02-27T18:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.824590 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c00d12667b939a0a72f9df56f2e8294bdb934b81fe13c07c0d6f83b6045c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.842013 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e1
8d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc 
kubenswrapper[4981]: I0227 18:46:57.865270 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:57Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.892624 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.892679 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.892697 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.892724 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.892742 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:57Z","lastTransitionTime":"2026-02-27T18:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.995958 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.996015 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.996032 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.996079 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:57 crc kubenswrapper[4981]: I0227 18:46:57.996098 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:57Z","lastTransitionTime":"2026-02-27T18:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.099036 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.099123 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.099140 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.099165 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.099185 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:58Z","lastTransitionTime":"2026-02-27T18:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.202182 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.202237 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.202255 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.202282 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.202301 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:58Z","lastTransitionTime":"2026-02-27T18:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.281618 4981 generic.go:334] "Generic (PLEG): container finished" podID="ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d" containerID="ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7" exitCode=0 Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.281694 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" event={"ID":"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d","Type":"ContainerDied","Data":"ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7"} Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.301342 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-27T18:46:58Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.306303 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.306385 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.306410 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.306439 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.306458 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:58Z","lastTransitionTime":"2026-02-27T18:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.330229 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:58Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.364275 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c00d12667b939a0a72f9df56f2e8294bdb934b81fe13c07c0d6f83b6045c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:58Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.385920 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:58Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.404350 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:46:58Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.409909 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.412694 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.412734 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.412759 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.412776 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:58Z","lastTransitionTime":"2026-02-27T18:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.424356 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:58Z 
is after 2025-08-24T17:21:41Z" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.444254 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78e1219b4a49e7b612e1b1ab3b6fe58f43c836ad2a06aa11813117ceef60e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twhd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:58Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.480481 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26
702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80ceb
ffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:58Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.504329 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:58Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.520145 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.520196 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.520208 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.520228 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:58 crc kubenswrapper[4981]: 
I0227 18:46:58.520245 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:58Z","lastTransitionTime":"2026-02-27T18:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.523585 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:58Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.571637 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:58Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.593128 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:58Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.611237 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cd
a8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:58Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.623668 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.623712 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.623728 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:58 crc 
kubenswrapper[4981]: I0227 18:46:58.623751 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.623779 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:58Z","lastTransitionTime":"2026-02-27T18:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.627520 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:58Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.627866 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.627917 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:46:58 crc kubenswrapper[4981]: E0227 18:46:58.627968 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:46:58 crc kubenswrapper[4981]: E0227 18:46:58.628111 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.627918 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:46:58 crc kubenswrapper[4981]: E0227 18:46:58.628214 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.642797 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:58Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.726748 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.726802 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.726817 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.726837 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.726850 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:58Z","lastTransitionTime":"2026-02-27T18:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.829629 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.829674 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.829687 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.829706 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.829719 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:58Z","lastTransitionTime":"2026-02-27T18:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.932939 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.932992 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.933009 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.933030 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:58 crc kubenswrapper[4981]: I0227 18:46:58.933043 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:58Z","lastTransitionTime":"2026-02-27T18:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.035703 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.035747 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.035758 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.035773 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.035785 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:59Z","lastTransitionTime":"2026-02-27T18:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.139311 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.139401 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.139421 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.139448 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.139472 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:59Z","lastTransitionTime":"2026-02-27T18:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.242166 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.242226 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.242244 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.242271 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.242290 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:59Z","lastTransitionTime":"2026-02-27T18:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.290910 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" event={"ID":"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d","Type":"ContainerStarted","Data":"912de196edcc2cd9b4e8b52e0e65b3a51c2269ad31c6f21a09fafac4af4ad6c2"} Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.317551 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://912de196edcc2cd9b4e8b52e0e65b3a51c2269ad31c6f21a09fafac4af4ad6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugin
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2dae
d8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedA
t\\\":\\\"2026-02-27T18:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\
",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:59Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.345700 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.345769 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.345788 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.345816 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.345834 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:59Z","lastTransitionTime":"2026-02-27T18:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.349982 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c00d12667b939a0a72f9df56f2e8294bdb934b81fe13c07c0d6f83b6045c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:59Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.367890 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e1
8d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:59Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:59 crc 
kubenswrapper[4981]: I0227 18:46:59.386993 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:59Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.406782 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18
:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:59Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.427609 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:59Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.449553 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.449599 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.449616 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.449641 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.449658 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:59Z","lastTransitionTime":"2026-02-27T18:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.450888 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:59Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.471324 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:59Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.495620 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:59Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.516568 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:59Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.545698 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cd
a8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:59Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.552018 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.552114 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.552142 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:59 crc 
kubenswrapper[4981]: I0227 18:46:59.552175 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.552201 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:59Z","lastTransitionTime":"2026-02-27T18:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.566716 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78e1219b4a49e7b612e1b1ab3b6fe58f43c836ad2a06aa11813117ceef60e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twhd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:59Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.611815 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:59Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.633431 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:59Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.654568 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.654628 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.654649 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.654674 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:59 crc 
kubenswrapper[4981]: I0227 18:46:59.654692 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:59Z","lastTransitionTime":"2026-02-27T18:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.656716 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:46:59Z is after 2025-08-24T17:21:41Z" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.758034 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.758099 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.758111 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.758130 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.758142 4981 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:59Z","lastTransitionTime":"2026-02-27T18:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.859954 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.860083 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.860106 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.860132 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.860152 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:59Z","lastTransitionTime":"2026-02-27T18:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.963275 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.963309 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.963322 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.963338 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:46:59 crc kubenswrapper[4981]: I0227 18:46:59.963349 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:46:59Z","lastTransitionTime":"2026-02-27T18:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.067001 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.067045 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.067102 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.067124 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.067140 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:00Z","lastTransitionTime":"2026-02-27T18:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.170307 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.170349 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.170361 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.170378 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.170392 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:00Z","lastTransitionTime":"2026-02-27T18:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.279121 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.279175 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.279194 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.279220 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.279238 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:00Z","lastTransitionTime":"2026-02-27T18:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.381854 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.381920 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.381939 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.381964 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.381982 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:00Z","lastTransitionTime":"2026-02-27T18:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.422115 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.422272 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:00 crc kubenswrapper[4981]: E0227 18:47:00.422540 4981 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 18:47:00 crc kubenswrapper[4981]: E0227 18:47:00.422550 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:32.422508206 +0000 UTC m=+151.901289396 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:00 crc kubenswrapper[4981]: E0227 18:47:00.422632 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 18:47:32.422608459 +0000 UTC m=+151.901389649 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.484960 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.485023 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.485042 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.485097 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.485118 4981 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:00Z","lastTransitionTime":"2026-02-27T18:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.523226 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.523282 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.523340 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:00 crc kubenswrapper[4981]: E0227 18:47:00.523483 4981 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 18:47:00 crc kubenswrapper[4981]: E0227 18:47:00.523538 4981 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 18:47:00 crc kubenswrapper[4981]: E0227 18:47:00.523564 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 18:47:00 crc kubenswrapper[4981]: E0227 18:47:00.523586 4981 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:47:00 crc kubenswrapper[4981]: E0227 18:47:00.523592 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 18:47:32.523564739 +0000 UTC m=+152.002345929 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 18:47:00 crc kubenswrapper[4981]: E0227 18:47:00.523638 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 18:47:00 crc kubenswrapper[4981]: E0227 18:47:00.523678 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 18:47:00 crc kubenswrapper[4981]: E0227 18:47:00.523699 4981 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:47:00 crc kubenswrapper[4981]: E0227 18:47:00.523651 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 18:47:32.523630341 +0000 UTC m=+152.002411541 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:47:00 crc kubenswrapper[4981]: E0227 18:47:00.523839 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 18:47:32.523796985 +0000 UTC m=+152.002578185 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.588517 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.588577 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.588598 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.588624 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.588644 4981 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:00Z","lastTransitionTime":"2026-02-27T18:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.628476 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.628569 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.628580 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:00 crc kubenswrapper[4981]: E0227 18:47:00.628693 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:47:00 crc kubenswrapper[4981]: E0227 18:47:00.628835 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:47:00 crc kubenswrapper[4981]: E0227 18:47:00.628923 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.692105 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.692167 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.692184 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.692208 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.692229 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:00Z","lastTransitionTime":"2026-02-27T18:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.795486 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.795546 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.795565 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.795590 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.795610 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:00Z","lastTransitionTime":"2026-02-27T18:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.855841 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.855901 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.855920 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.855947 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.855965 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:00Z","lastTransitionTime":"2026-02-27T18:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:00 crc kubenswrapper[4981]: E0227 18:47:00.875748 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:00Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.880762 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.880807 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.880824 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.880845 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.880860 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:00Z","lastTransitionTime":"2026-02-27T18:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:00 crc kubenswrapper[4981]: E0227 18:47:00.900554 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:00Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.905878 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.905931 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.905948 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.905968 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.905983 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:00Z","lastTransitionTime":"2026-02-27T18:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:00 crc kubenswrapper[4981]: E0227 18:47:00.926249 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:00Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.931092 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.931141 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.931157 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.931179 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.931196 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:00Z","lastTransitionTime":"2026-02-27T18:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:00 crc kubenswrapper[4981]: E0227 18:47:00.950963 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:00Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.955690 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.955760 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.955782 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.955807 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.955831 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:00Z","lastTransitionTime":"2026-02-27T18:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:00 crc kubenswrapper[4981]: E0227 18:47:00.975095 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:00Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:00 crc kubenswrapper[4981]: E0227 18:47:00.975337 4981 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.977163 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.977225 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.977251 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.977277 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:00 crc kubenswrapper[4981]: I0227 18:47:00.977295 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:00Z","lastTransitionTime":"2026-02-27T18:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.080238 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.080287 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.080304 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.080328 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.080345 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:01Z","lastTransitionTime":"2026-02-27T18:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.183418 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.183464 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.183482 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.183505 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.183522 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:01Z","lastTransitionTime":"2026-02-27T18:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.286683 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.286750 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.286767 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.286791 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.286813 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:01Z","lastTransitionTime":"2026-02-27T18:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.299998 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlwn_0918866b-8c49-4332-bb4d-bea02b35f047/ovnkube-controller/0.log" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.304231 4981 generic.go:334] "Generic (PLEG): container finished" podID="0918866b-8c49-4332-bb4d-bea02b35f047" containerID="08c00d12667b939a0a72f9df56f2e8294bdb934b81fe13c07c0d6f83b6045c54" exitCode=1 Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.304300 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" event={"ID":"0918866b-8c49-4332-bb4d-bea02b35f047","Type":"ContainerDied","Data":"08c00d12667b939a0a72f9df56f2e8294bdb934b81fe13c07c0d6f83b6045c54"} Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.305580 4981 scope.go:117] "RemoveContainer" containerID="08c00d12667b939a0a72f9df56f2e8294bdb934b81fe13c07c0d6f83b6045c54" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.326607 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.345600 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.367381 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.386794 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1953c5004b66db2
ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.389614 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.389683 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.389736 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.389769 4981 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.389792 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:01Z","lastTransitionTime":"2026-02-27T18:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.404674 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cda8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.422236 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78e1219b4a49e7b612e1b1ab3b6fe58f43c836ad2a06aa11813117ceef60e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46
:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twhd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.451004 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.468424 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.488758 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.493193 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.493244 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.493262 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.493287 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.493306 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:01Z","lastTransitionTime":"2026-02-27T18:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.511775 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.536588 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.556434 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.575167 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac
-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: E0227 18:47:01.594284 4981 
kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.598385 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://912de196edcc2cd9b4e8b52e0e65b3a51c2269ad31c6f21a09fafac4af4ad6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,
\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.629328 4981 scope.go:117] "RemoveContainer" containerID="8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.629459 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c00d12667b939a0a72f9df56f2e8294bdb934b81fe13c07c0d6f83b6045c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08c00d12667b939a0a72f9df56f2e8294bdb934b81fe13c07c0d6f83b6045c54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\" 8\\\\nI0227 18:47:00.315038 6863 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 18:47:00.315044 6863 handler.go:208] Removed *v1.Node event handler 7\\\\nI0227 18:47:00.315066 6863 handler.go:208] Removed 
*v1.Node event handler 2\\\\nI0227 18:47:00.315072 6863 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0227 18:47:00.315319 6863 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.315816 6863 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316143 6863 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316247 6863 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316527 6863 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316680 6863 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.318131 6863 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0227 18:47:00.318220 6863 factory.go:656] Stopping watch factory\\\\nI0227 18:47:00.318248 6863 ovnkube.go:599] Stopped ovnkube\\\\nI0227 
18:47:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156
bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.648628 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.666148 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.686391 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.706927 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: E0227 18:47:01.715735 4981 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.728544 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a
2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.747024 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-pr
oxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cda8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.762874 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78e1219b4a49e7b612e1b1ab3b6fe58f43c836ad2a06aa11813117ceef60e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twhd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.796073 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18
:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.820880 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.840120 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.857557 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.872561 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.888601 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac
-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.912011 4981 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://912de196edcc2cd9b4e8b52e0e65b3a51c2269ad31c6f21a09fafac4af4ad6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"init
ContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d00
3d3a5a420052bccb8953c5060c11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:4
6:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:01 crc kubenswrapper[4981]: I0227 18:47:01.943650 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08c00d12667b939a0a72f9df56f2e8294bdb934b81fe13c07c0d6f83b6045c54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08c00d12667b939a0a72f9df56f2e8294bdb934b81fe13c07c0d6f83b6045c54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\" 8\\\\nI0227 18:47:00.315038 6863 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 18:47:00.315044 6863 handler.go:208] Removed *v1.Node event handler 7\\\\nI0227 18:47:00.315066 6863 handler.go:208] Removed 
*v1.Node event handler 2\\\\nI0227 18:47:00.315072 6863 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0227 18:47:00.315319 6863 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.315816 6863 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316143 6863 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316247 6863 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316527 6863 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316680 6863 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.318131 6863 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0227 18:47:00.318220 6863 factory.go:656] Stopping watch factory\\\\nI0227 18:47:00.318248 6863 ovnkube.go:599] Stopped ovnkube\\\\nI0227 
18:47:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156
bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:01Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.311430 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlwn_0918866b-8c49-4332-bb4d-bea02b35f047/ovnkube-controller/0.log" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.316317 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" event={"ID":"0918866b-8c49-4332-bb4d-bea02b35f047","Type":"ContainerStarted","Data":"bb60771763fa29572511b4fb63664d120451f36cf7498216be89fcfa0577490e"} Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.317933 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.341923 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.362459 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.389596 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.406997 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.452208 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.471790 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.498734 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.509281 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cd
a8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.519431 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78e1219b4a49e7b612e1b1ab3b6fe58f43c836ad2a06aa11813117ceef60e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twhd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.541108 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18
:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.550004 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.562875 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9"] Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.563310 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.565013 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.565114 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.565489 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.579597 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://912de196edcc2cd9b4e8b52e0e65b3a51c2269ad31c6f21a09fafac4af4ad6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbfd
f237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.594513 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60771763fa29572511b4fb63664d120451f36cf7498216be89fcfa0577490e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08c00d12667b939a0a72f9df56f2e8294bdb934b81fe13c07c0d6f83b6045c54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\" 8\\\\nI0227 18:47:00.315038 6863 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 18:47:00.315044 6863 handler.go:208] Removed *v1.Node event handler 7\\\\nI0227 18:47:00.315066 6863 handler.go:208] Removed *v1.Node event handler 2\\\\nI0227 18:47:00.315072 6863 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0227 
18:47:00.315319 6863 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.315816 6863 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316143 6863 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316247 6863 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316527 6863 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316680 6863 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.318131 6863 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0227 18:47:00.318220 6863 factory.go:656] Stopping watch factory\\\\nI0227 18:47:00.318248 6863 ovnkube.go:599] Stopped ovnkube\\\\nI0227 
18:47:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.608224 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.622893 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.628459 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.628504 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.628529 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:02 crc kubenswrapper[4981]: E0227 18:47:02.628551 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:47:02 crc kubenswrapper[4981]: E0227 18:47:02.628658 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:47:02 crc kubenswrapper[4981]: E0227 18:47:02.628708 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.639931 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://912de196edcc2cd9b4e8b52e0e65b3a51c2269ad31c6f21a09fafac4af4ad6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-
release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.649975 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/252b7f50-0e8b-4b8e-b165-79233ca02bf3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8szb9\" (UID: \"252b7f50-0e8b-4b8e-b165-79233ca02bf3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.650045 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5krs\" (UniqueName: \"kubernetes.io/projected/252b7f50-0e8b-4b8e-b165-79233ca02bf3-kube-api-access-c5krs\") pod \"ovnkube-control-plane-749d76644c-8szb9\" (UID: \"252b7f50-0e8b-4b8e-b165-79233ca02bf3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.650158 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/252b7f50-0e8b-4b8e-b165-79233ca02bf3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8szb9\" (UID: \"252b7f50-0e8b-4b8e-b165-79233ca02bf3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" Feb 27 18:47:02 
crc kubenswrapper[4981]: I0227 18:47:02.650234 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/252b7f50-0e8b-4b8e-b165-79233ca02bf3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8szb9\" (UID: \"252b7f50-0e8b-4b8e-b165-79233ca02bf3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.659523 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60771763fa29572511b4fb63664d120451f36cf7498216be89fcfa0577490e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08c00d12667b939a0a72f9df56f2e8294bdb934b81fe13c07c0d6f83b6045c54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\" 8\\\\nI0227 18:47:00.315038 6863 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 18:47:00.315044 6863 handler.go:208] Removed *v1.Node event handler 7\\\\nI0227 18:47:00.315066 6863 handler.go:208] Removed *v1.Node event handler 2\\\\nI0227 18:47:00.315072 6863 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0227 
18:47:00.315319 6863 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.315816 6863 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316143 6863 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316247 6863 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316527 6863 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316680 6863 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.318131 6863 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0227 18:47:00.318220 6863 factory.go:656] Stopping watch factory\\\\nI0227 18:47:00.318248 6863 ovnkube.go:599] Stopped ovnkube\\\\nI0227 
18:47:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.674273 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.688270 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.700187 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.718640 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.736533 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.750467 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.750767 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/252b7f50-0e8b-4b8e-b165-79233ca02bf3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8szb9\" (UID: \"252b7f50-0e8b-4b8e-b165-79233ca02bf3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.750830 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5krs\" (UniqueName: \"kubernetes.io/projected/252b7f50-0e8b-4b8e-b165-79233ca02bf3-kube-api-access-c5krs\") pod \"ovnkube-control-plane-749d76644c-8szb9\" (UID: \"252b7f50-0e8b-4b8e-b165-79233ca02bf3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.750900 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/252b7f50-0e8b-4b8e-b165-79233ca02bf3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8szb9\" (UID: \"252b7f50-0e8b-4b8e-b165-79233ca02bf3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.750973 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/252b7f50-0e8b-4b8e-b165-79233ca02bf3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8szb9\" (UID: \"252b7f50-0e8b-4b8e-b165-79233ca02bf3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.751946 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/252b7f50-0e8b-4b8e-b165-79233ca02bf3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8szb9\" (UID: \"252b7f50-0e8b-4b8e-b165-79233ca02bf3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.751951 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/252b7f50-0e8b-4b8e-b165-79233ca02bf3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8szb9\" (UID: \"252b7f50-0e8b-4b8e-b165-79233ca02bf3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.764721 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/252b7f50-0e8b-4b8e-b165-79233ca02bf3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8szb9\" (UID: \"252b7f50-0e8b-4b8e-b165-79233ca02bf3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" Feb 27 18:47:02 crc 
kubenswrapper[4981]: I0227 18:47:02.766955 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5krs\" (UniqueName: \"kubernetes.io/projected/252b7f50-0e8b-4b8e-b165-79233ca02bf3-kube-api-access-c5krs\") pod \"ovnkube-control-plane-749d76644c-8szb9\" (UID: \"252b7f50-0e8b-4b8e-b165-79233ca02bf3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.778552 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.798977 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a
2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.814921 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-pr
oxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cda8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.828666 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78e1219b4a49e7b612e1b1ab3b6fe58f43c836ad2a06aa11813117ceef60e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twhd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.842910 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"252b7f50-0e8b-4b8e-b165-79233ca02bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:47:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8szb9\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.858991 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.873435 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:02Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:02 crc kubenswrapper[4981]: I0227 18:47:02.876684 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" Feb 27 18:47:02 crc kubenswrapper[4981]: W0227 18:47:02.893755 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod252b7f50_0e8b_4b8e_b165_79233ca02bf3.slice/crio-4f43235c58f69556a446cbc6ca3e30648a688de8141fcd69a4a5bd53ff429aa4 WatchSource:0}: Error finding container 4f43235c58f69556a446cbc6ca3e30648a688de8141fcd69a4a5bd53ff429aa4: Status 404 returned error can't find the container with id 4f43235c58f69556a446cbc6ca3e30648a688de8141fcd69a4a5bd53ff429aa4 Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.310509 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-n2dzw"] Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.311551 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:03 crc kubenswrapper[4981]: E0227 18:47:03.311646 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2dzw" podUID="f11688f5-7d6e-4931-88e5-31a5183eb6f3" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.322403 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.324413 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a7b136fb1de92dc4033968909c08b413444fa2a097e3af36d13c919b46a64cbb"} Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.324683 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.326645 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlwn_0918866b-8c49-4332-bb4d-bea02b35f047/ovnkube-controller/1.log" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.327377 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlwn_0918866b-8c49-4332-bb4d-bea02b35f047/ovnkube-controller/0.log" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.330408 4981 generic.go:334] "Generic (PLEG): container finished" podID="0918866b-8c49-4332-bb4d-bea02b35f047" containerID="bb60771763fa29572511b4fb63664d120451f36cf7498216be89fcfa0577490e" exitCode=1 Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.330503 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" event={"ID":"0918866b-8c49-4332-bb4d-bea02b35f047","Type":"ContainerDied","Data":"bb60771763fa29572511b4fb63664d120451f36cf7498216be89fcfa0577490e"} Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.330595 4981 scope.go:117] "RemoveContainer" 
containerID="08c00d12667b939a0a72f9df56f2e8294bdb934b81fe13c07c0d6f83b6045c54" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.331192 4981 scope.go:117] "RemoveContainer" containerID="bb60771763fa29572511b4fb63664d120451f36cf7498216be89fcfa0577490e" Feb 27 18:47:03 crc kubenswrapper[4981]: E0227 18:47:03.331401 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6rlwn_openshift-ovn-kubernetes(0918866b-8c49-4332-bb4d-bea02b35f047)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.332382 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" event={"ID":"252b7f50-0e8b-4b8e-b165-79233ca02bf3","Type":"ContainerStarted","Data":"65763c56cd317f0c30b92a96207c6f78c4c511f2575025b69c27bfe618bfb801"} Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.332430 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" event={"ID":"252b7f50-0e8b-4b8e-b165-79233ca02bf3","Type":"ContainerStarted","Data":"4f43235c58f69556a446cbc6ca3e30648a688de8141fcd69a4a5bd53ff429aa4"} Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.341504 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.357512 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f11688f5-7d6e-4931-88e5-31a5183eb6f3-metrics-certs\") pod \"network-metrics-daemon-n2dzw\" (UID: \"f11688f5-7d6e-4931-88e5-31a5183eb6f3\") " pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.357574 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vg7q\" (UniqueName: \"kubernetes.io/projected/f11688f5-7d6e-4931-88e5-31a5183eb6f3-kube-api-access-5vg7q\") pod \"network-metrics-daemon-n2dzw\" (UID: 
\"f11688f5-7d6e-4931-88e5-31a5183eb6f3\") " pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.367035 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://912de196edcc2cd9b4e8b52e0e65b3a51c2269ad31c6f21a09fafac4af4ad6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\"
:\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabo
uts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.395109 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60771763fa29572511b4fb63664d120451f36cf7498216be89fcfa0577490e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08c00d12667b939a0a72f9df56f2e8294bdb934b81fe13c07c0d6f83b6045c54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\" 8\\\\nI0227 18:47:00.315038 6863 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 18:47:00.315044 6863 handler.go:208] Removed *v1.Node event handler 7\\\\nI0227 18:47:00.315066 6863 handler.go:208] Removed *v1.Node event handler 2\\\\nI0227 18:47:00.315072 6863 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0227 
18:47:00.315319 6863 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.315816 6863 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316143 6863 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316247 6863 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316527 6863 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316680 6863 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.318131 6863 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0227 18:47:00.318220 6863 factory.go:656] Stopping watch factory\\\\nI0227 18:47:00.318248 6863 ovnkube.go:599] Stopped ovnkube\\\\nI0227 
18:47:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.423318 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.441775 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.458688 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f11688f5-7d6e-4931-88e5-31a5183eb6f3-metrics-certs\") pod \"network-metrics-daemon-n2dzw\" (UID: \"f11688f5-7d6e-4931-88e5-31a5183eb6f3\") " pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.458795 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vg7q\" (UniqueName: \"kubernetes.io/projected/f11688f5-7d6e-4931-88e5-31a5183eb6f3-kube-api-access-5vg7q\") pod \"network-metrics-daemon-n2dzw\" (UID: \"f11688f5-7d6e-4931-88e5-31a5183eb6f3\") " pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:03 crc kubenswrapper[4981]: E0227 18:47:03.458928 4981 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 18:47:03 crc kubenswrapper[4981]: E0227 18:47:03.459090 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f11688f5-7d6e-4931-88e5-31a5183eb6f3-metrics-certs podName:f11688f5-7d6e-4931-88e5-31a5183eb6f3 nodeName:}" failed. No retries permitted until 2026-02-27 18:47:03.959023723 +0000 UTC m=+123.437804914 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f11688f5-7d6e-4931-88e5-31a5183eb6f3-metrics-certs") pod "network-metrics-daemon-n2dzw" (UID: "f11688f5-7d6e-4931-88e5-31a5183eb6f3") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.466396 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.485294 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/sec
rets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.493680 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vg7q\" (UniqueName: \"kubernetes.io/projected/f11688f5-7d6e-4931-88e5-31a5183eb6f3-kube-api-access-5vg7q\") pod \"network-metrics-daemon-n2dzw\" (UID: \"f11688f5-7d6e-4931-88e5-31a5183eb6f3\") " pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 
18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.507202 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cda8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.526299 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78e1219b4a49e7b612e1b1ab3b6fe58f43c836ad2a06aa11813117ceef60e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twhd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.567372 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18
:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.589913 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.607517 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.627181 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.642690 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"252b7f50-0e8b-4b8e-b165-79233ca02bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:47:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8szb9\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.654570 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2dzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11688f5-7d6e-4931-88e5-31a5183eb6f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vg7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vg7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2dzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc 
kubenswrapper[4981]: I0227 18:47:03.672410 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.688864 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.705845 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.717636 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.731463 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac
-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.751625 4981 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://912de196edcc2cd9b4e8b52e0e65b3a51c2269ad31c6f21a09fafac4af4ad6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"init
ContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d00
3d3a5a420052bccb8953c5060c11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:4
6:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.785043 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60771763fa29572511b4fb63664d120451f36cf7498216be89fcfa0577490e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08c00d12667b939a0a72f9df56f2e8294bdb934b81fe13c07c0d6f83b6045c54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\" 8\\\\nI0227 18:47:00.315038 6863 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 18:47:00.315044 6863 handler.go:208] Removed *v1.Node event handler 7\\\\nI0227 18:47:00.315066 6863 handler.go:208] Removed *v1.Node event handler 2\\\\nI0227 18:47:00.315072 6863 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0227 
18:47:00.315319 6863 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.315816 6863 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316143 6863 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316247 6863 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316527 6863 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316680 6863 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.318131 6863 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0227 18:47:00.318220 6863 factory.go:656] Stopping watch factory\\\\nI0227 18:47:00.318248 6863 ovnkube.go:599] Stopped ovnkube\\\\nI0227 18:47:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb60771763fa29572511b4fb63664d120451f36cf7498216be89fcfa0577490e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"message\\\":\\\"Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0227 18:47:02.905392 7099 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 18:47:02.905372 7099 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"na
me\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env
-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 
18:47:03.802339 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.817871 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.835120 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.845904 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78e1219b4a49e7b612e1b1ab3b6fe58f43c836ad2a06aa11813117ceef60e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twhd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.866191 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.886937 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b136fb1de92dc4033968909c08b413444fa2a097e3af36d13c919b46a64cbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.903740 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.922313 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.943282 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.960349 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cd
a8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.964398 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f11688f5-7d6e-4931-88e5-31a5183eb6f3-metrics-certs\") pod \"network-metrics-daemon-n2dzw\" (UID: \"f11688f5-7d6e-4931-88e5-31a5183eb6f3\") " pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:03 crc kubenswrapper[4981]: E0227 18:47:03.964592 4981 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object 
"openshift-multus"/"metrics-daemon-secret" not registered Feb 27 18:47:03 crc kubenswrapper[4981]: E0227 18:47:03.964694 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f11688f5-7d6e-4931-88e5-31a5183eb6f3-metrics-certs podName:f11688f5-7d6e-4931-88e5-31a5183eb6f3 nodeName:}" failed. No retries permitted until 2026-02-27 18:47:04.964666746 +0000 UTC m=+124.443447936 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f11688f5-7d6e-4931-88e5-31a5183eb6f3-metrics-certs") pod "network-metrics-daemon-n2dzw" (UID: "f11688f5-7d6e-4931-88e5-31a5183eb6f3") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.977845 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"252b7f50-0e8b-4b8e-b165-79233ca02bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:47:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8szb9\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:03 crc kubenswrapper[4981]: I0227 18:47:03.994436 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2dzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11688f5-7d6e-4931-88e5-31a5183eb6f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vg7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vg7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2dzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:03Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc 
kubenswrapper[4981]: I0227 18:47:04.338583 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" event={"ID":"252b7f50-0e8b-4b8e-b165-79233ca02bf3","Type":"ContainerStarted","Data":"a5b7c7d65b1617faae5147ede651ba5346a69d810d255d757035e1b045bbc600"} Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.342321 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlwn_0918866b-8c49-4332-bb4d-bea02b35f047/ovnkube-controller/1.log" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.347440 4981 scope.go:117] "RemoveContainer" containerID="bb60771763fa29572511b4fb63664d120451f36cf7498216be89fcfa0577490e" Feb 27 18:47:04 crc kubenswrapper[4981]: E0227 18:47:04.347710 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6rlwn_openshift-ovn-kubernetes(0918866b-8c49-4332-bb4d-bea02b35f047)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.358932 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.381979 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://912de196edcc2cd9b4e8b52e0e65b3a51c2269ad31c6f21a09fafac4af4ad6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbfd
f237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.413206 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60771763fa29572511b4fb63664d120451f36cf7498216be89fcfa0577490e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08c00d12667b939a0a72f9df56f2e8294bdb934b81fe13c07c0d6f83b6045c54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T18:47:00Z\\\",\\\"message\\\":\\\" 8\\\\nI0227 18:47:00.315038 6863 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 18:47:00.315044 6863 handler.go:208] Removed *v1.Node event handler 7\\\\nI0227 18:47:00.315066 6863 handler.go:208] Removed *v1.Node event handler 2\\\\nI0227 18:47:00.315072 6863 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0227 
18:47:00.315319 6863 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.315816 6863 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316143 6863 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316247 6863 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316527 6863 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.316680 6863 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 18:47:00.318131 6863 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0227 18:47:00.318220 6863 factory.go:656] Stopping watch factory\\\\nI0227 18:47:00.318248 6863 ovnkube.go:599] Stopped ovnkube\\\\nI0227 18:47:0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb60771763fa29572511b4fb63664d120451f36cf7498216be89fcfa0577490e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"message\\\":\\\"Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0227 18:47:02.905392 7099 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 18:47:02.905372 7099 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"na
me\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env
-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 
18:47:04.432790 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.449681 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.476374 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.507914 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.529088 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b136fb1de92dc4033968909c08b413444fa2a097e3af36d13c919b46a64cbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.547841 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.568017 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.587414 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.605241 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cd
a8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.619929 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78e1219b4a49e7b612e1b1ab3b6fe58f43c836ad2a06aa11813117ceef60e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twhd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.627521 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:04 crc kubenswrapper[4981]: E0227 18:47:04.627674 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.628276 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.628279 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.628415 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:04 crc kubenswrapper[4981]: E0227 18:47:04.628563 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:47:04 crc kubenswrapper[4981]: E0227 18:47:04.628678 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2dzw" podUID="f11688f5-7d6e-4931-88e5-31a5183eb6f3" Feb 27 18:47:04 crc kubenswrapper[4981]: E0227 18:47:04.628818 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.637206 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"252b7f50-0e8b-4b8e-b165-79233ca02bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65763c56cd317f0c30b92a96207c6f78c4c511f2575025b69c27bfe618bfb801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-
plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c7d65b1617faae5147ede651ba5346a69d810d255d757035e1b045bbc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:47:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8szb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.640005 4981 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.654778 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2dzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11688f5-7d6e-4931-88e5-31a5183eb6f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vg7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vg7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2dzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc 
kubenswrapper[4981]: I0227 18:47:04.674140 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.690477 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.708875 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.724980 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.742544 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac
-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.759778 4981 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef8848e-8b7f-4a12-8436-6deee33371c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b78d032b71162dda8d49cc9cd3a6febb3da5326777dc926b9656919b41d320e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5e73486c7d159ec69ff6b39c2a2a93dfe4fa7cdf5b62bdfb91a6bb5d8b2e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6
b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094de35e827134fa5b9827035972708e7deb8d5ed21e43051209cd7da41af18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://273e5f0d5ff4cf6238bcbc9b79621756922942638298fb586b14ae71f1d95688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-
host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273e5f0d5ff4cf6238bcbc9b79621756922942638298fb586b14ae71f1d95688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.776270 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://912de196edcc2cd9b4e8b52e0e65b3a51c2269ad31c6f21a09fafac4af4ad6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbfd
f237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.801866 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60771763fa29572511b4fb63664d120451f36cf7498216be89fcfa0577490e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb60771763fa29572511b4fb63664d120451f36cf7498216be89fcfa0577490e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"message\\\":\\\"Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0227 18:47:02.905392 7099 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 18:47:02.905372 7099 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:47:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6rlwn_openshift-ovn-kubernetes(0918866b-8c49-4332-bb4d-bea02b35f047)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654f
c7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.819485 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.833258 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.848425 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.867543 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1953c5004b66db2
ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.880626 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cd
a8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.900458 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78e1219b4a49e7b612e1b1ab3b6fe58f43c836ad2a06aa11813117ceef60e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twhd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.931718 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18
:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.953823 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b136fb1de92dc4033968909c08b413444fa2a097e3af36d13c919b46a64cbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.967635 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.974533 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f11688f5-7d6e-4931-88e5-31a5183eb6f3-metrics-certs\") pod \"network-metrics-daemon-n2dzw\" (UID: \"f11688f5-7d6e-4931-88e5-31a5183eb6f3\") " 
pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:04 crc kubenswrapper[4981]: E0227 18:47:04.974679 4981 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 18:47:04 crc kubenswrapper[4981]: E0227 18:47:04.974736 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f11688f5-7d6e-4931-88e5-31a5183eb6f3-metrics-certs podName:f11688f5-7d6e-4931-88e5-31a5183eb6f3 nodeName:}" failed. No retries permitted until 2026-02-27 18:47:06.974721236 +0000 UTC m=+126.453502406 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f11688f5-7d6e-4931-88e5-31a5183eb6f3-metrics-certs") pod "network-metrics-daemon-n2dzw" (UID: "f11688f5-7d6e-4931-88e5-31a5183eb6f3") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.983505 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:04 crc kubenswrapper[4981]: I0227 18:47:04.996584 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"252b7f50-0e8b-4b8e-b165-79233ca02bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65763c56cd317f0c30b92a96207c6f78c4c511f2575025b69c27bfe618bfb801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plan
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c7d65b1617faae5147ede651ba5346a69d810d255d757035e1b045bbc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:47:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8szb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:04Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:05 crc kubenswrapper[4981]: I0227 18:47:05.014363 4981 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/network-metrics-daemon-n2dzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11688f5-7d6e-4931-88e5-31a5183eb6f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vg7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vg7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2dzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:05Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:06 crc 
kubenswrapper[4981]: I0227 18:47:06.628476 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:06 crc kubenswrapper[4981]: I0227 18:47:06.628571 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:06 crc kubenswrapper[4981]: I0227 18:47:06.628593 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:06 crc kubenswrapper[4981]: E0227 18:47:06.628732 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:47:06 crc kubenswrapper[4981]: I0227 18:47:06.628764 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:06 crc kubenswrapper[4981]: E0227 18:47:06.628914 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:47:06 crc kubenswrapper[4981]: E0227 18:47:06.629165 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2dzw" podUID="f11688f5-7d6e-4931-88e5-31a5183eb6f3" Feb 27 18:47:06 crc kubenswrapper[4981]: E0227 18:47:06.629256 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:47:06 crc kubenswrapper[4981]: E0227 18:47:06.717416 4981 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 27 18:47:06 crc kubenswrapper[4981]: I0227 18:47:06.994369 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f11688f5-7d6e-4931-88e5-31a5183eb6f3-metrics-certs\") pod \"network-metrics-daemon-n2dzw\" (UID: \"f11688f5-7d6e-4931-88e5-31a5183eb6f3\") " pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:06 crc kubenswrapper[4981]: E0227 18:47:06.994630 4981 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 18:47:06 crc kubenswrapper[4981]: E0227 18:47:06.994753 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f11688f5-7d6e-4931-88e5-31a5183eb6f3-metrics-certs podName:f11688f5-7d6e-4931-88e5-31a5183eb6f3 nodeName:}" failed. No retries permitted until 2026-02-27 18:47:10.994717061 +0000 UTC m=+130.473498271 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f11688f5-7d6e-4931-88e5-31a5183eb6f3-metrics-certs") pod "network-metrics-daemon-n2dzw" (UID: "f11688f5-7d6e-4931-88e5-31a5183eb6f3") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 18:47:08 crc kubenswrapper[4981]: I0227 18:47:08.409913 4981 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 27 18:47:08 crc kubenswrapper[4981]: I0227 18:47:08.628277 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:08 crc kubenswrapper[4981]: I0227 18:47:08.628315 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:08 crc kubenswrapper[4981]: I0227 18:47:08.628335 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:08 crc kubenswrapper[4981]: I0227 18:47:08.628368 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:08 crc kubenswrapper[4981]: E0227 18:47:08.628488 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:47:08 crc kubenswrapper[4981]: E0227 18:47:08.628639 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:47:08 crc kubenswrapper[4981]: E0227 18:47:08.628836 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2dzw" podUID="f11688f5-7d6e-4931-88e5-31a5183eb6f3" Feb 27 18:47:08 crc kubenswrapper[4981]: E0227 18:47:08.628904 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:47:10 crc kubenswrapper[4981]: I0227 18:47:10.627833 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:10 crc kubenswrapper[4981]: I0227 18:47:10.627901 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:10 crc kubenswrapper[4981]: E0227 18:47:10.629134 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:47:10 crc kubenswrapper[4981]: I0227 18:47:10.627961 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:10 crc kubenswrapper[4981]: E0227 18:47:10.629261 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:47:10 crc kubenswrapper[4981]: I0227 18:47:10.627944 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:10 crc kubenswrapper[4981]: E0227 18:47:10.629138 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2dzw" podUID="f11688f5-7d6e-4931-88e5-31a5183eb6f3" Feb 27 18:47:10 crc kubenswrapper[4981]: E0227 18:47:10.629356 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.045140 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f11688f5-7d6e-4931-88e5-31a5183eb6f3-metrics-certs\") pod \"network-metrics-daemon-n2dzw\" (UID: \"f11688f5-7d6e-4931-88e5-31a5183eb6f3\") " pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:11 crc kubenswrapper[4981]: E0227 18:47:11.045431 4981 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 18:47:11 crc kubenswrapper[4981]: E0227 18:47:11.045859 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f11688f5-7d6e-4931-88e5-31a5183eb6f3-metrics-certs podName:f11688f5-7d6e-4931-88e5-31a5183eb6f3 nodeName:}" failed. No retries permitted until 2026-02-27 18:47:19.045825614 +0000 UTC m=+138.524606804 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f11688f5-7d6e-4931-88e5-31a5183eb6f3-metrics-certs") pod "network-metrics-daemon-n2dzw" (UID: "f11688f5-7d6e-4931-88e5-31a5183eb6f3") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.195158 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.195216 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.195234 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.195261 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.195279 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:11Z","lastTransitionTime":"2026-02-27T18:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:11 crc kubenswrapper[4981]: E0227 18:47:11.222403 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:11Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.227598 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.227795 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.227919 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.228101 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.228209 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:11Z","lastTransitionTime":"2026-02-27T18:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:11 crc kubenswrapper[4981]: E0227 18:47:11.248832 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:11Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.253222 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.253272 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.253290 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.253315 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.253333 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:11Z","lastTransitionTime":"2026-02-27T18:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:11 crc kubenswrapper[4981]: E0227 18:47:11.273598 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:11Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.278000 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.278108 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.278135 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.278163 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.278187 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:11Z","lastTransitionTime":"2026-02-27T18:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:11 crc kubenswrapper[4981]: E0227 18:47:11.297829 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:11Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.302608 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.302654 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.302672 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.302695 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.302713 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:11Z","lastTransitionTime":"2026-02-27T18:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:11 crc kubenswrapper[4981]: E0227 18:47:11.320959 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:11Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:11 crc kubenswrapper[4981]: E0227 18:47:11.321208 4981 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.647688 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:11Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.664413 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:11Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.679617 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac
-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:11Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.697535 4981 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef8848e-8b7f-4a12-8436-6deee33371c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b78d032b71162dda8d49cc9cd3a6febb3da5326777dc926b9656919b41d320e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5e73486c7d159ec69ff6b39c2a2a93dfe4fa7cdf5b62bdfb91a6bb5d8b2e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6
b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094de35e827134fa5b9827035972708e7deb8d5ed21e43051209cd7da41af18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://273e5f0d5ff4cf6238bcbc9b79621756922942638298fb586b14ae71f1d95688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-
host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273e5f0d5ff4cf6238bcbc9b79621756922942638298fb586b14ae71f1d95688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:11Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:11 crc kubenswrapper[4981]: E0227 18:47:11.718326 4981 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.721797 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://912de196edcc2cd9b4e8b52e0e65b3a51c2269ad31c6f21a09fafac4af4ad6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostI
P\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e85674
16fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:11Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.751501 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb60771763fa29572511b4fb63664d120451f36cf7498216be89fcfa0577490e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb60771763fa29572511b4fb63664d120451f36cf7498216be89fcfa0577490e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"message\\\":\\\"Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0227 18:47:02.905392 7099 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 18:47:02.905372 7099 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:47:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6rlwn_openshift-ovn-kubernetes(0918866b-8c49-4332-bb4d-bea02b35f047)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654f
c7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:11Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.769537 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:11Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.786416 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:11Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.806883 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:11Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.829115 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:11Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.848123 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\
\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:11Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.864724 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cd
a8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:11Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.879036 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78e1219b4a49e7b612e1b1ab3b6fe58f43c836ad2a06aa11813117ceef60e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twhd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:11Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.913270 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18
:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:11Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.933960 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b136fb1de92dc4033968909c08b413444fa2a097e3af36d13c919b46a64cbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:11Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.952007 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:11Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.969325 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"252b7f50-0e8b-4b8e-b165-79233ca02bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65763c56cd317f0c30b92a96207c6f78c4c511f2575025b69c27bfe618bfb801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c7d65b1617faae5147ede651ba5346a69
d810d255d757035e1b045bbc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:47:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8szb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:11Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:11 crc kubenswrapper[4981]: I0227 18:47:11.987298 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2dzw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11688f5-7d6e-4931-88e5-31a5183eb6f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vg7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vg7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2dzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:11Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:12 crc 
kubenswrapper[4981]: I0227 18:47:12.628135 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:12 crc kubenswrapper[4981]: I0227 18:47:12.628168 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:12 crc kubenswrapper[4981]: I0227 18:47:12.628215 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:12 crc kubenswrapper[4981]: E0227 18:47:12.628347 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:47:12 crc kubenswrapper[4981]: I0227 18:47:12.628413 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:12 crc kubenswrapper[4981]: E0227 18:47:12.628637 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:47:12 crc kubenswrapper[4981]: E0227 18:47:12.628725 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:47:12 crc kubenswrapper[4981]: E0227 18:47:12.628922 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2dzw" podUID="f11688f5-7d6e-4931-88e5-31a5183eb6f3" Feb 27 18:47:14 crc kubenswrapper[4981]: I0227 18:47:14.627788 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:14 crc kubenswrapper[4981]: I0227 18:47:14.627833 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:14 crc kubenswrapper[4981]: I0227 18:47:14.627803 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:14 crc kubenswrapper[4981]: E0227 18:47:14.627965 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:47:14 crc kubenswrapper[4981]: E0227 18:47:14.628106 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:47:14 crc kubenswrapper[4981]: E0227 18:47:14.628347 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2dzw" podUID="f11688f5-7d6e-4931-88e5-31a5183eb6f3" Feb 27 18:47:14 crc kubenswrapper[4981]: I0227 18:47:14.629286 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:14 crc kubenswrapper[4981]: E0227 18:47:14.629491 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:47:16 crc kubenswrapper[4981]: I0227 18:47:16.627893 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:16 crc kubenswrapper[4981]: I0227 18:47:16.627942 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:16 crc kubenswrapper[4981]: I0227 18:47:16.628444 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:16 crc kubenswrapper[4981]: I0227 18:47:16.628567 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:16 crc kubenswrapper[4981]: E0227 18:47:16.628616 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2dzw" podUID="f11688f5-7d6e-4931-88e5-31a5183eb6f3" Feb 27 18:47:16 crc kubenswrapper[4981]: E0227 18:47:16.628753 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:47:16 crc kubenswrapper[4981]: E0227 18:47:16.628882 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:47:16 crc kubenswrapper[4981]: I0227 18:47:16.628904 4981 scope.go:117] "RemoveContainer" containerID="bb60771763fa29572511b4fb63664d120451f36cf7498216be89fcfa0577490e" Feb 27 18:47:16 crc kubenswrapper[4981]: E0227 18:47:16.629034 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:47:16 crc kubenswrapper[4981]: E0227 18:47:16.719962 4981 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 27 18:47:17 crc kubenswrapper[4981]: I0227 18:47:17.406019 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlwn_0918866b-8c49-4332-bb4d-bea02b35f047/ovnkube-controller/1.log" Feb 27 18:47:17 crc kubenswrapper[4981]: I0227 18:47:17.410235 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" event={"ID":"0918866b-8c49-4332-bb4d-bea02b35f047","Type":"ContainerStarted","Data":"f97a12e1711e303ddf5cac918727a315715716c2b2e4239a36f592d45b89bad0"} Feb 27 18:47:17 crc kubenswrapper[4981]: I0227 18:47:17.410908 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:47:17 crc kubenswrapper[4981]: I0227 18:47:17.440246 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:17Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:17 crc kubenswrapper[4981]: I0227 18:47:17.492141 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:17Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:17 crc kubenswrapper[4981]: I0227 18:47:17.512567 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac
-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:17Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:17 crc kubenswrapper[4981]: I0227 18:47:17.527667 4981 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef8848e-8b7f-4a12-8436-6deee33371c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b78d032b71162dda8d49cc9cd3a6febb3da5326777dc926b9656919b41d320e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5e73486c7d159ec69ff6b39c2a2a93dfe4fa7cdf5b62bdfb91a6bb5d8b2e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6
b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094de35e827134fa5b9827035972708e7deb8d5ed21e43051209cd7da41af18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://273e5f0d5ff4cf6238bcbc9b79621756922942638298fb586b14ae71f1d95688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-
host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273e5f0d5ff4cf6238bcbc9b79621756922942638298fb586b14ae71f1d95688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:17Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:17 crc kubenswrapper[4981]: I0227 18:47:17.544471 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://912de196edcc2cd9b4e8b52e0e65b3a51c2269ad31c6f21a09fafac4af4ad6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbfd
f237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:17Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:17 crc kubenswrapper[4981]: I0227 18:47:17.565455 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a12e1711e303ddf5cac918727a315715716c2b2e4239a36f592d45b89bad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb60771763fa29572511b4fb63664d120451f36cf7498216be89fcfa0577490e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"message\\\":\\\"Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0227 18:47:02.905392 7099 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 18:47:02.905372 7099 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:47:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:17Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:17 crc kubenswrapper[4981]: I0227 18:47:17.580515 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:17Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:17 crc kubenswrapper[4981]: I0227 18:47:17.598022 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:17Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:17 crc kubenswrapper[4981]: I0227 18:47:17.615004 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:17Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:17 crc kubenswrapper[4981]: I0227 18:47:17.627397 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78e1219b4a49e7b612e1b1ab3b6fe58f43c836ad2a06aa11813117ceef60e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twhd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:17Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:17 crc kubenswrapper[4981]: I0227 18:47:17.649598 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:17Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:17 crc kubenswrapper[4981]: I0227 18:47:17.667854 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b136fb1de92dc4033968909c08b413444fa2a097e3af36d13c919b46a64cbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:17Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:17 crc kubenswrapper[4981]: I0227 18:47:17.683718 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:17Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:17 crc kubenswrapper[4981]: I0227 18:47:17.702595 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:17Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:17 crc kubenswrapper[4981]: I0227 18:47:17.716770 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:17Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:17 crc kubenswrapper[4981]: I0227 18:47:17.730771 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cd
a8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:17Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:17 crc kubenswrapper[4981]: I0227 18:47:17.743778 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"252b7f50-0e8b-4b8e-b165-79233ca02bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65763c56cd317f0c30b92a96207c6f78c4c511f2575025b69c27bfe618bfb801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c7d65b1617faae5147ede651ba5346a69
d810d255d757035e1b045bbc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:47:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8szb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:17Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:17 crc kubenswrapper[4981]: I0227 18:47:17.755556 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2dzw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11688f5-7d6e-4931-88e5-31a5183eb6f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vg7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vg7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2dzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:17Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:18 crc 
kubenswrapper[4981]: I0227 18:47:18.418003 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlwn_0918866b-8c49-4332-bb4d-bea02b35f047/ovnkube-controller/2.log" Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.419460 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlwn_0918866b-8c49-4332-bb4d-bea02b35f047/ovnkube-controller/1.log" Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.424080 4981 generic.go:334] "Generic (PLEG): container finished" podID="0918866b-8c49-4332-bb4d-bea02b35f047" containerID="f97a12e1711e303ddf5cac918727a315715716c2b2e4239a36f592d45b89bad0" exitCode=1 Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.424137 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" event={"ID":"0918866b-8c49-4332-bb4d-bea02b35f047","Type":"ContainerDied","Data":"f97a12e1711e303ddf5cac918727a315715716c2b2e4239a36f592d45b89bad0"} Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.424227 4981 scope.go:117] "RemoveContainer" containerID="bb60771763fa29572511b4fb63664d120451f36cf7498216be89fcfa0577490e" Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.425921 4981 scope.go:117] "RemoveContainer" containerID="f97a12e1711e303ddf5cac918727a315715716c2b2e4239a36f592d45b89bad0" Feb 27 18:47:18 crc kubenswrapper[4981]: E0227 18:47:18.426430 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6rlwn_openshift-ovn-kubernetes(0918866b-8c49-4332-bb4d-bea02b35f047)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.446714 4981 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:18Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.464361 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cd
a8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:18Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.480651 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78e1219b4a49e7b612e1b1ab3b6fe58f43c836ad2a06aa11813117ceef60e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twhd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:18Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.513672 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18
:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:18Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.537331 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b136fb1de92dc4033968909c08b413444fa2a097e3af36d13c919b46a64cbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:18Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.556862 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:18Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.575362 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:18Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.590811 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"252b7f50-0e8b-4b8e-b165-79233ca02bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65763c56cd317f0c30b92a96207c6f78c4c511f2575025b69c27bfe618bfb801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plan
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c7d65b1617faae5147ede651ba5346a69d810d255d757035e1b045bbc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:47:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8szb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:18Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.605224 4981 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/network-metrics-daemon-n2dzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11688f5-7d6e-4931-88e5-31a5183eb6f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vg7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vg7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2dzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:18Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:18 crc 
kubenswrapper[4981]: I0227 18:47:18.623489 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:18Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.628440 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.628467 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.628582 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:18 crc kubenswrapper[4981]: E0227 18:47:18.628630 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:47:18 crc kubenswrapper[4981]: E0227 18:47:18.628743 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2dzw" podUID="f11688f5-7d6e-4931-88e5-31a5183eb6f3" Feb 27 18:47:18 crc kubenswrapper[4981]: E0227 18:47:18.628865 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.629349 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:18 crc kubenswrapper[4981]: E0227 18:47:18.629493 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.638044 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:18Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.641846 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.653725 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:18Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.669861 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef8848e-8b7f-4a12-8436-6deee33371c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b78d032b71162dda8d49cc9cd3a6febb3da5326777dc926b9656919b41d320e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5e73486c7d159ec69ff6b39c2a2a93dfe4fa7cdf5b62bdfb91a6bb5d8b2e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094de35e827134fa5b9827035972708e7deb8d5ed21e43051209cd7da41af18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://273e5f0d5ff4cf6238bcbc9b79621756922942638298fb586b14ae71f1d95688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://273e5f0d5ff4cf6238bcbc9b79621756922942638298fb586b14ae71f1d95688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:18Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.692456 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://912de196edcc2cd9b4e8b52e0e65b3a51c2269ad31c6f21a09fafac4af4ad6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbfd
f237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:18Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.722804 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a12e1711e303ddf5cac918727a315715716c2b2e4239a36f592d45b89bad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb60771763fa29572511b4fb63664d120451f36cf7498216be89fcfa0577490e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"message\\\":\\\"Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0227 18:47:02.905392 7099 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 18:47:02.905372 7099 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:47:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a12e1711e303ddf5cac918727a315715716c2b2e4239a36f592d45b89bad0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T18:47:17Z\\\",\\\"message\\\":\\\"onfig-operator/kube-rbac-proxy-crio-crc\\\\nI0227 18:47:17.887365 7356 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-ktw87 in node crc\\\\nI0227 18:47:17.887652 7356 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-5pm8g\\\\nI0227 18:47:17.887278 7356 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0227 18:47:17.887663 7356 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-5pm8g\\\\nF0227 18:47:17.887664 7356 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is 
not\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:18Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.740614 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:18Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.758573 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:18Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:18 crc kubenswrapper[4981]: I0227 18:47:18.778845 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:18Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.090388 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.106250 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.122041 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.131695 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f11688f5-7d6e-4931-88e5-31a5183eb6f3-metrics-certs\") pod \"network-metrics-daemon-n2dzw\" (UID: \"f11688f5-7d6e-4931-88e5-31a5183eb6f3\") " pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:19 crc kubenswrapper[4981]: E0227 18:47:19.131877 4981 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 18:47:19 crc kubenswrapper[4981]: E0227 18:47:19.132267 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f11688f5-7d6e-4931-88e5-31a5183eb6f3-metrics-certs podName:f11688f5-7d6e-4931-88e5-31a5183eb6f3 nodeName:}" failed. No retries permitted until 2026-02-27 18:47:35.132239206 +0000 UTC m=+154.611020406 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f11688f5-7d6e-4931-88e5-31a5183eb6f3-metrics-certs") pod "network-metrics-daemon-n2dzw" (UID: "f11688f5-7d6e-4931-88e5-31a5183eb6f3") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.138280 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.154990 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef8848e-8b7f-4a12-8436-6deee33371c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b78d032b71162dda8d49cc9cd3a6febb3da5326777dc926b9656919b41d320e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5e73486c7d159ec69ff6b39c2a2a93dfe4fa7cdf5b62bdfb91a6bb5d8b2e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094de35e827134fa5b9827035972708e7deb8d5ed21e43051209cd7da41af18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://273e5f0d5ff4cf6238bcbc9b79621756922942638298fb586b14ae71f1d95688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://273e5f0d5ff4cf6238bcbc9b79621756922942638298fb586b14ae71f1d95688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.175642 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://912de196edcc2cd9b4e8b52e0e65b3a51c2269ad31c6f21a09fafac4af4ad6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbfd
f237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.199087 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a12e1711e303ddf5cac918727a315715716c2b2e4239a36f592d45b89bad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb60771763fa29572511b4fb63664d120451f36cf7498216be89fcfa0577490e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"message\\\":\\\"Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0227 18:47:02.905392 7099 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 18:47:02.905372 7099 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:47:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a12e1711e303ddf5cac918727a315715716c2b2e4239a36f592d45b89bad0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T18:47:17Z\\\",\\\"message\\\":\\\"onfig-operator/kube-rbac-proxy-crio-crc\\\\nI0227 18:47:17.887365 7356 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-ktw87 in node crc\\\\nI0227 18:47:17.887652 7356 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-5pm8g\\\\nI0227 18:47:17.887278 7356 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0227 18:47:17.887663 7356 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-5pm8g\\\\nF0227 18:47:17.887664 7356 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is 
not\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.217245 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.233548 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.252736 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.287646 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1953c5004b66db2
ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.324047 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cd
a8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.340542 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78e1219b4a49e7b612e1b1ab3b6fe58f43c836ad2a06aa11813117ceef60e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twhd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.366987 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18
:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.383251 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b136fb1de92dc4033968909c08b413444fa2a097e3af36d13c919b46a64cbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719
cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.403172 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b9cd39-0166-4ad5-aae4-62595ce987ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33ee2f6a8a6b33109ce0e0c8edea9d353fcadcb447dab73fc0763ca9f484a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e33e23de51fc20ecaa8ed6d4c7f561de316c8b2adf3f2eb94b0f4284c0ae982a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:45:34Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 18:45:03.927659 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 18:45:03.935920 1 observer_polling.go:159] Starting file observer\\\\nI0227 18:45:04.003231 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 18:45:04.010284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0227 18:45:34.205850 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:33Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47da708cea4fc8dee9c3d6ac4bb7473cdc255bfed85666ddf72c1b49d93d94ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc26ccdc87598f2980e1dc35395d3846450da6de6c1818de589d95441798e232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f5edf9f42b84e2bf4ad4c6a88f9b2a14c248fb3fc1f821c9720d310d5d4a28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.425282 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.431499 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlwn_0918866b-8c49-4332-bb4d-bea02b35f047/ovnkube-controller/2.log" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.436238 4981 scope.go:117] "RemoveContainer" containerID="f97a12e1711e303ddf5cac918727a315715716c2b2e4239a36f592d45b89bad0" Feb 27 18:47:19 crc kubenswrapper[4981]: E0227 18:47:19.436493 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6rlwn_openshift-ovn-kubernetes(0918866b-8c49-4332-bb4d-bea02b35f047)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.445927 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.470671 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"252b7f50-0e8b-4b8e-b165-79233ca02bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65763c56cd317f0c30b92a96207c6f78c4c511f2575025b69c27bfe618bfb801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\
\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c7d65b1617faae5147ede651ba5346a69d810d255d757035e1b045bbc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:47:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8szb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.488549 4981 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2dzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11688f5-7d6e-4931-88e5-31a5183eb6f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vg7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vg7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2dzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc 
kubenswrapper[4981]: I0227 18:47:19.505766 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.525772 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.544767 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.559005 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cd
a8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.571471 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78e1219b4a49e7b612e1b1ab3b6fe58f43c836ad2a06aa11813117ceef60e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twhd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.603941 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18
:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.626243 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b136fb1de92dc4033968909c08b413444fa2a097e3af36d13c919b46a64cbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719
cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.647431 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b9cd39-0166-4ad5-aae4-62595ce987ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33ee2f6a8a6b33109ce0e0c8edea9d353fcadcb447dab73fc0763ca9f484a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e33e23de51fc20ecaa8ed6d4c7f561de316c8b2adf3f2eb94b0f4284c0ae982a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:45:34Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 18:45:03.927659 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 18:45:03.935920 1 observer_polling.go:159] Starting file observer\\\\nI0227 18:45:04.003231 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 18:45:04.010284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0227 18:45:34.205850 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:33Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47da708cea4fc8dee9c3d6ac4bb7473cdc255bfed85666ddf72c1b49d93d94ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc26ccdc87598f2980e1dc35395d3846450da6de6c1818de589d95441798e232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f5edf9f42b84e2bf4ad4c6a88f9b2a14c248fb3fc1f821c9720d310d5d4a28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.666289 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"252b7f50-0e8b-4b8e-b165-79233ca02bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65763c56cd317f0c30b92a96207c6f78c4c511f2575025b69c27bfe618bfb801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c7d65b1617faae5147ede651ba5346a69
d810d255d757035e1b045bbc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:47:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8szb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.682663 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2dzw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11688f5-7d6e-4931-88e5-31a5183eb6f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vg7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vg7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2dzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc 
kubenswrapper[4981]: I0227 18:47:19.700477 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.715575 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.744804 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a12e1711e303ddf5cac918727a315715716c2b2e4239a36f592d45b89bad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a12e1711e303ddf5cac918727a315715716c2b2e4239a36f592d45b89bad0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T18:47:17Z\\\",\\\"message\\\":\\\"onfig-operator/kube-rbac-proxy-crio-crc\\\\nI0227 18:47:17.887365 7356 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-ktw87 in node crc\\\\nI0227 18:47:17.887652 7356 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-5pm8g\\\\nI0227 18:47:17.887278 7356 
obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0227 18:47:17.887663 7356 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-5pm8g\\\\nF0227 18:47:17.887664 7356 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:47:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6rlwn_openshift-ovn-kubernetes(0918866b-8c49-4332-bb4d-bea02b35f047)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654f
c7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.760647 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.778677 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef8848e-8b7f-4a12-8436-6deee33371c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b78d032b71162dda8d49cc9cd3a6febb3da5326777dc926b9656919b41d320e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5e73486c7d159ec69ff6b39c2a2a93dfe4fa7cdf5b62bdfb91a6bb5d8b2e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094de35e827134fa5b9827035972708e7deb8d5ed21e43051209cd7da41af18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://273e5f0d5ff4cf6238bcbc9b79621756922942638298fb586b14ae71f1d95688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://273e5f0d5ff4cf6238bcbc9b79621756922942638298fb586b14ae71f1d95688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.801041 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://912de196edcc2cd9b4e8b52e0e65b3a51c2269ad31c6f21a09fafac4af4ad6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbfd
f237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.819517 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.836384 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:19 crc kubenswrapper[4981]: I0227 18:47:19.856032 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:19Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:20 crc kubenswrapper[4981]: I0227 18:47:20.628326 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:20 crc kubenswrapper[4981]: I0227 18:47:20.628467 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:20 crc kubenswrapper[4981]: I0227 18:47:20.628503 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:20 crc kubenswrapper[4981]: I0227 18:47:20.628336 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:20 crc kubenswrapper[4981]: E0227 18:47:20.628592 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2dzw" podUID="f11688f5-7d6e-4931-88e5-31a5183eb6f3" Feb 27 18:47:20 crc kubenswrapper[4981]: E0227 18:47:20.628847 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:47:20 crc kubenswrapper[4981]: E0227 18:47:20.629285 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:47:20 crc kubenswrapper[4981]: E0227 18:47:20.629441 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.336212 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.336267 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.336284 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.336308 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.336326 4981 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:21Z","lastTransitionTime":"2026-02-27T18:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:47:21 crc kubenswrapper[4981]: E0227 18:47:21.356962 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:21Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.362521 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.362582 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.362609 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.362641 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.362668 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:21Z","lastTransitionTime":"2026-02-27T18:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:21 crc kubenswrapper[4981]: E0227 18:47:21.382351 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:21Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.387161 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.387220 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.387242 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.387269 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.387291 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:21Z","lastTransitionTime":"2026-02-27T18:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:21 crc kubenswrapper[4981]: E0227 18:47:21.407503 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:21Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.413450 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.413501 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.413518 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.413541 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.413558 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:21Z","lastTransitionTime":"2026-02-27T18:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:21 crc kubenswrapper[4981]: E0227 18:47:21.435639 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:21Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.440906 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.440958 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.440979 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.441004 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.441023 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:21Z","lastTransitionTime":"2026-02-27T18:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:21 crc kubenswrapper[4981]: E0227 18:47:21.460325 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:21Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:21 crc kubenswrapper[4981]: E0227 18:47:21.460541 4981 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.650527 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:21Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.671497 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:21Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.690186 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:21Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.704246 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cd
a8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:21Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:21 crc kubenswrapper[4981]: E0227 18:47:21.721719 4981 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.722430 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwdj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78e1219b4a49e7b612e1b1ab3b6fe58f43c836ad2a06aa11813117ceef60e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-twhd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:21Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.754211 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb6
8e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee78
66be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8cc
d65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:21Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.771520 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b136fb1de92dc4033968909c08b413444fa2a097e3af36d13c919b46a64cbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719
cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:21Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.790430 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b9cd39-0166-4ad5-aae4-62595ce987ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33ee2f6a8a6b33109ce0e0c8edea9d353fcadcb447dab73fc0763ca9f484a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e33e23de51fc20ecaa8ed6d4c7f561de316c8b2adf3f2eb94b0f4284c0ae982a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:45:34Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 18:45:03.927659 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 18:45:03.935920 1 observer_polling.go:159] Starting file observer\\\\nI0227 18:45:04.003231 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 18:45:04.010284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0227 18:45:34.205850 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:33Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47da708cea4fc8dee9c3d6ac4bb7473cdc255bfed85666ddf72c1b49d93d94ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc26ccdc87598f2980e1dc35395d3846450da6de6c1818de589d95441798e232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f5edf9f42b84e2bf4ad4c6a88f9b2a14c248fb3fc1f821c9720d310d5d4a28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:21Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.806741 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"252b7f50-0e8b-4b8e-b165-79233ca02bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65763c56cd317f0c30b92a96207c6f78c4c511f2575025b69c27bfe618bfb801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c7d65b1617faae5147ede651ba5346a69
d810d255d757035e1b045bbc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:47:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8szb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:21Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.822886 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2dzw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11688f5-7d6e-4931-88e5-31a5183eb6f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vg7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vg7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2dzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:21Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:21 crc 
kubenswrapper[4981]: I0227 18:47:21.842467 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:21Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.858522 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:21Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.896526 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a12e1711e303ddf5cac918727a315715716c2b2e4239a36f592d45b89bad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a12e1711e303ddf5cac918727a315715716c2b2e4239a36f592d45b89bad0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T18:47:17Z\\\",\\\"message\\\":\\\"onfig-operator/kube-rbac-proxy-crio-crc\\\\nI0227 18:47:17.887365 7356 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-ktw87 in node crc\\\\nI0227 18:47:17.887652 7356 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-5pm8g\\\\nI0227 18:47:17.887278 7356 
obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0227 18:47:17.887663 7356 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-5pm8g\\\\nF0227 18:47:17.887664 7356 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:47:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6rlwn_openshift-ovn-kubernetes(0918866b-8c49-4332-bb4d-bea02b35f047)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654f
c7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:21Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.911798 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:21Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.929485 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef8848e-8b7f-4a12-8436-6deee33371c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b78d032b71162dda8d49cc9cd3a6febb3da5326777dc926b9656919b41d320e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5e73486c7d159ec69ff6b39c2a2a93dfe4fa7cdf5b62bdfb91a6bb5d8b2e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094de35e827134fa5b9827035972708e7deb8d5ed21e43051209cd7da41af18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://273e5f0d5ff4cf6238bcbc9b79621756922942638298fb586b14ae71f1d95688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://273e5f0d5ff4cf6238bcbc9b79621756922942638298fb586b14ae71f1d95688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:21Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.950151 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://912de196edcc2cd9b4e8b52e0e65b3a51c2269ad31c6f21a09fafac4af4ad6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbfd
f237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:21Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.967452 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:21Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:21 crc kubenswrapper[4981]: I0227 18:47:21.984131 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:21Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:22 crc kubenswrapper[4981]: I0227 18:47:22.000716 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:21Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:22 crc kubenswrapper[4981]: I0227 18:47:22.627537 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:22 crc kubenswrapper[4981]: I0227 18:47:22.627604 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:22 crc kubenswrapper[4981]: I0227 18:47:22.627637 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:22 crc kubenswrapper[4981]: E0227 18:47:22.627831 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:47:22 crc kubenswrapper[4981]: I0227 18:47:22.627891 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:22 crc kubenswrapper[4981]: E0227 18:47:22.628459 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:47:22 crc kubenswrapper[4981]: E0227 18:47:22.628359 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2dzw" podUID="f11688f5-7d6e-4931-88e5-31a5183eb6f3" Feb 27 18:47:22 crc kubenswrapper[4981]: E0227 18:47:22.628546 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:47:24 crc kubenswrapper[4981]: I0227 18:47:24.628443 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:24 crc kubenswrapper[4981]: I0227 18:47:24.628516 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:24 crc kubenswrapper[4981]: I0227 18:47:24.628516 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:24 crc kubenswrapper[4981]: I0227 18:47:24.628454 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:24 crc kubenswrapper[4981]: E0227 18:47:24.628835 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:47:24 crc kubenswrapper[4981]: E0227 18:47:24.628938 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:47:24 crc kubenswrapper[4981]: E0227 18:47:24.628733 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:47:24 crc kubenswrapper[4981]: E0227 18:47:24.629186 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2dzw" podUID="f11688f5-7d6e-4931-88e5-31a5183eb6f3" Feb 27 18:47:26 crc kubenswrapper[4981]: I0227 18:47:26.628375 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:26 crc kubenswrapper[4981]: I0227 18:47:26.628431 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:26 crc kubenswrapper[4981]: I0227 18:47:26.628376 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:26 crc kubenswrapper[4981]: E0227 18:47:26.628552 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2dzw" podUID="f11688f5-7d6e-4931-88e5-31a5183eb6f3" Feb 27 18:47:26 crc kubenswrapper[4981]: E0227 18:47:26.628704 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:47:26 crc kubenswrapper[4981]: E0227 18:47:26.628852 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:47:26 crc kubenswrapper[4981]: I0227 18:47:26.629695 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:26 crc kubenswrapper[4981]: E0227 18:47:26.629922 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:47:26 crc kubenswrapper[4981]: E0227 18:47:26.723199 4981 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 18:47:28 crc kubenswrapper[4981]: I0227 18:47:28.628345 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:28 crc kubenswrapper[4981]: I0227 18:47:28.628400 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:28 crc kubenswrapper[4981]: I0227 18:47:28.628432 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:28 crc kubenswrapper[4981]: E0227 18:47:28.628555 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:47:28 crc kubenswrapper[4981]: I0227 18:47:28.628617 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:28 crc kubenswrapper[4981]: E0227 18:47:28.628788 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:47:28 crc kubenswrapper[4981]: E0227 18:47:28.629473 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2dzw" podUID="f11688f5-7d6e-4931-88e5-31a5183eb6f3" Feb 27 18:47:28 crc kubenswrapper[4981]: E0227 18:47:28.629560 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:47:30 crc kubenswrapper[4981]: I0227 18:47:30.627955 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:30 crc kubenswrapper[4981]: I0227 18:47:30.627966 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:30 crc kubenswrapper[4981]: E0227 18:47:30.628168 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:47:30 crc kubenswrapper[4981]: I0227 18:47:30.628107 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:30 crc kubenswrapper[4981]: I0227 18:47:30.628344 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:30 crc kubenswrapper[4981]: E0227 18:47:30.628331 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:47:30 crc kubenswrapper[4981]: E0227 18:47:30.628460 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2dzw" podUID="f11688f5-7d6e-4931-88e5-31a5183eb6f3" Feb 27 18:47:30 crc kubenswrapper[4981]: E0227 18:47:30.628592 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.650194 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mo
untPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cda8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.666939 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78e1219b4a49e7b612e1b1ab3b6fe58f43c836ad2a06aa11813117ceef60e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twhd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.701274 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.701336 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.701354 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.701379 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.701398 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:31Z","lastTransitionTime":"2026-02-27T18:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.702452 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:31 crc kubenswrapper[4981]: E0227 18:47:31.723741 4981 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 27 18:47:31 crc kubenswrapper[4981]: E0227 18:47:31.723090 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.726748 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b136fb1de92dc4033968909c08b413444fa2a097e3af36d13c919b46a
64cbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 
18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c
5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.729782 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.729823 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.729841 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.729864 4981 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.729882 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:31Z","lastTransitionTime":"2026-02-27T18:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.747233 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b9cd39-0166-4ad5-aae4-62595ce987ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33ee2f6a8a6b33109ce0e0c8edea9d353fcadcb447dab73fc0763ca9f484a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9d
a410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e33e23de51fc20ecaa8ed6d4c7f561de316c8b2adf3f2eb94b0f4284c0ae982a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:45:34Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0227 18:45:03.927659 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0227 18:45:03.935920 1 observer_polling.go:159] Starting file observer\\\\nI0227 18:45:04.003231 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 18:45:04.010284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0227 18:45:34.205850 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:33Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47da708cea4fc8dee9c3d6ac4bb7473cdc255bfed85666ddf72c1b49d93d94ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc26ccdc87598f2980e1dc35395d3846450da6de6c1818de589d95441798e232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f5edf9f42b84e2bf4ad4c6a88f9b2a14c248fb3fc1f821c9720d310d5d4a28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-d
ir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:31 crc kubenswrapper[4981]: E0227 18:47:31.752298 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.759602 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.759659 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.759677 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.759700 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.759721 4981 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:31Z","lastTransitionTime":"2026-02-27T18:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.767171 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:31 crc kubenswrapper[4981]: E0227 18:47:31.778997 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.784347 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.784440 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.784460 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.784484 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.784502 4981 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:31Z","lastTransitionTime":"2026-02-27T18:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.787743 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:31 crc kubenswrapper[4981]: E0227 18:47:31.802896 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.806962 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.807002 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.807011 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.807027 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.807037 4981 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:31Z","lastTransitionTime":"2026-02-27T18:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.807347 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:31 crc kubenswrapper[4981]: E0227 18:47:31.825006 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1b99e48b-f223-4d99-b29b-1960f0d38aec\\\",\\\"systemUUID\\\":\\\"adfb44cb-eacb-4bdb-ac3c-af6421f66947\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:31 crc kubenswrapper[4981]: E0227 18:47:31.825270 4981 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.828930 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"252b7f50-0e8b-4b8e-b165-79233ca02bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65763c56cd317f0c30b92a96207c6f78c4c511f2575025b69c27bfe618bfb801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c7d65b1617faae5147ede651ba5346a69d810d255d757035e1b045bbc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:47:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8szb9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.845292 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2dzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11688f5-7d6e-4931-88e5-31a5183eb6f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vg7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vg7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2dzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:31 crc 
kubenswrapper[4981]: I0227 18:47:31.862951 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.879266 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.894150 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac
-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.911084 4981 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef8848e-8b7f-4a12-8436-6deee33371c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b78d032b71162dda8d49cc9cd3a6febb3da5326777dc926b9656919b41d320e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5e73486c7d159ec69ff6b39c2a2a93dfe4fa7cdf5b62bdfb91a6bb5d8b2e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6
b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094de35e827134fa5b9827035972708e7deb8d5ed21e43051209cd7da41af18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://273e5f0d5ff4cf6238bcbc9b79621756922942638298fb586b14ae71f1d95688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-
host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273e5f0d5ff4cf6238bcbc9b79621756922942638298fb586b14ae71f1d95688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.932642 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://912de196edcc2cd9b4e8b52e0e65b3a51c2269ad31c6f21a09fafac4af4ad6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbfd
f237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.963375 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a12e1711e303ddf5cac918727a315715716c2b2e4239a36f592d45b89bad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a12e1711e303ddf5cac918727a315715716c2b2e4239a36f592d45b89bad0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T18:47:17Z\\\",\\\"message\\\":\\\"onfig-operator/kube-rbac-proxy-crio-crc\\\\nI0227 18:47:17.887365 7356 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-ktw87 in node crc\\\\nI0227 18:47:17.887652 7356 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-5pm8g\\\\nI0227 18:47:17.887278 7356 
obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0227 18:47:17.887663 7356 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-5pm8g\\\\nF0227 18:47:17.887664 7356 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:47:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6rlwn_openshift-ovn-kubernetes(0918866b-8c49-4332-bb4d-bea02b35f047)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654f
c7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.981902 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:31 crc kubenswrapper[4981]: I0227 18:47:31.998880 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:31Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:32 crc kubenswrapper[4981]: I0227 18:47:32.018704 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:32Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:32 crc kubenswrapper[4981]: I0227 18:47:32.494352 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:47:32 crc kubenswrapper[4981]: I0227 18:47:32.494491 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:32 crc kubenswrapper[4981]: E0227 18:47:32.494588 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:48:36.494537663 +0000 UTC m=+215.973318863 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:32 crc kubenswrapper[4981]: E0227 18:47:32.494671 4981 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 18:47:32 crc kubenswrapper[4981]: E0227 18:47:32.494784 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 18:48:36.49475675 +0000 UTC m=+215.973537940 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 18:47:32 crc kubenswrapper[4981]: I0227 18:47:32.596165 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:32 crc kubenswrapper[4981]: I0227 18:47:32.596278 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:32 crc kubenswrapper[4981]: I0227 18:47:32.596352 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:32 crc kubenswrapper[4981]: E0227 18:47:32.596483 4981 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 18:47:32 crc kubenswrapper[4981]: E0227 18:47:32.596487 4981 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 18:47:32 crc kubenswrapper[4981]: E0227 18:47:32.596532 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 18:47:32 crc kubenswrapper[4981]: E0227 18:47:32.596550 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 18:48:36.596532993 +0000 UTC m=+216.075314183 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 18:47:32 crc kubenswrapper[4981]: E0227 18:47:32.596552 4981 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:47:32 crc kubenswrapper[4981]: E0227 18:47:32.596635 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 18:48:36.596611445 +0000 UTC m=+216.075392635 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:47:32 crc kubenswrapper[4981]: E0227 18:47:32.596741 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 18:47:32 crc kubenswrapper[4981]: E0227 18:47:32.596769 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 18:47:32 crc kubenswrapper[4981]: E0227 18:47:32.596790 4981 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:47:32 crc kubenswrapper[4981]: E0227 18:47:32.596851 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 18:48:36.596829732 +0000 UTC m=+216.075610922 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 18:47:32 crc kubenswrapper[4981]: I0227 18:47:32.627912 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:32 crc kubenswrapper[4981]: I0227 18:47:32.627996 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:32 crc kubenswrapper[4981]: E0227 18:47:32.628216 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2dzw" podUID="f11688f5-7d6e-4931-88e5-31a5183eb6f3" Feb 27 18:47:32 crc kubenswrapper[4981]: I0227 18:47:32.628241 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:32 crc kubenswrapper[4981]: I0227 18:47:32.628368 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:32 crc kubenswrapper[4981]: E0227 18:47:32.628422 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:47:32 crc kubenswrapper[4981]: E0227 18:47:32.628914 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:47:32 crc kubenswrapper[4981]: E0227 18:47:32.629083 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:47:32 crc kubenswrapper[4981]: I0227 18:47:32.629424 4981 scope.go:117] "RemoveContainer" containerID="f97a12e1711e303ddf5cac918727a315715716c2b2e4239a36f592d45b89bad0" Feb 27 18:47:32 crc kubenswrapper[4981]: E0227 18:47:32.629674 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6rlwn_openshift-ovn-kubernetes(0918866b-8c49-4332-bb4d-bea02b35f047)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" Feb 27 18:47:34 crc kubenswrapper[4981]: I0227 18:47:34.628468 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:34 crc kubenswrapper[4981]: I0227 18:47:34.628529 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:34 crc kubenswrapper[4981]: I0227 18:47:34.628481 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:34 crc kubenswrapper[4981]: I0227 18:47:34.628613 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:34 crc kubenswrapper[4981]: E0227 18:47:34.628674 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2dzw" podUID="f11688f5-7d6e-4931-88e5-31a5183eb6f3" Feb 27 18:47:34 crc kubenswrapper[4981]: E0227 18:47:34.628782 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:47:34 crc kubenswrapper[4981]: E0227 18:47:34.628902 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:47:34 crc kubenswrapper[4981]: E0227 18:47:34.628950 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:47:35 crc kubenswrapper[4981]: I0227 18:47:35.223237 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f11688f5-7d6e-4931-88e5-31a5183eb6f3-metrics-certs\") pod \"network-metrics-daemon-n2dzw\" (UID: \"f11688f5-7d6e-4931-88e5-31a5183eb6f3\") " pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:35 crc kubenswrapper[4981]: E0227 18:47:35.223436 4981 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 18:47:35 crc kubenswrapper[4981]: E0227 18:47:35.223847 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f11688f5-7d6e-4931-88e5-31a5183eb6f3-metrics-certs podName:f11688f5-7d6e-4931-88e5-31a5183eb6f3 nodeName:}" failed. No retries permitted until 2026-02-27 18:48:07.223813431 +0000 UTC m=+186.702594631 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f11688f5-7d6e-4931-88e5-31a5183eb6f3-metrics-certs") pod "network-metrics-daemon-n2dzw" (UID: "f11688f5-7d6e-4931-88e5-31a5183eb6f3") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 18:47:36 crc kubenswrapper[4981]: I0227 18:47:36.628389 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:36 crc kubenswrapper[4981]: I0227 18:47:36.628474 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:36 crc kubenswrapper[4981]: E0227 18:47:36.628573 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:47:36 crc kubenswrapper[4981]: I0227 18:47:36.628616 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:36 crc kubenswrapper[4981]: E0227 18:47:36.628801 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2dzw" podUID="f11688f5-7d6e-4931-88e5-31a5183eb6f3" Feb 27 18:47:36 crc kubenswrapper[4981]: E0227 18:47:36.628865 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:47:36 crc kubenswrapper[4981]: I0227 18:47:36.628412 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:36 crc kubenswrapper[4981]: E0227 18:47:36.629216 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:47:36 crc kubenswrapper[4981]: E0227 18:47:36.724744 4981 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 18:47:37 crc kubenswrapper[4981]: I0227 18:47:37.512929 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-992xv_2f03f89e-d428-4246-a710-23c47810b60e/kube-multus/0.log" Feb 27 18:47:37 crc kubenswrapper[4981]: I0227 18:47:37.513006 4981 generic.go:334] "Generic (PLEG): container finished" podID="2f03f89e-d428-4246-a710-23c47810b60e" containerID="624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c" exitCode=1 Feb 27 18:47:37 crc kubenswrapper[4981]: I0227 18:47:37.513048 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-992xv" event={"ID":"2f03f89e-d428-4246-a710-23c47810b60e","Type":"ContainerDied","Data":"624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c"} Feb 27 18:47:37 crc kubenswrapper[4981]: I0227 18:47:37.513639 4981 scope.go:117] "RemoveContainer" containerID="624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c" Feb 27 18:47:37 crc kubenswrapper[4981]: I0227 18:47:37.550182 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:37Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:37 crc kubenswrapper[4981]: I0227 18:47:37.572236 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b136fb1de92dc4033968909c08b413444fa2a097e3af36d13c919b46a64cbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:37Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:37 crc kubenswrapper[4981]: I0227 18:47:37.592874 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b9cd39-0166-4ad5-aae4-62595ce987ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33ee2f6a8a6b33109ce0e0c8edea9d353fcadcb447dab73fc0763ca9f484a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e33e23de51fc20ecaa8ed6d4c7f561de316c8b2adf3f2eb94b0f4284c0ae982a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:45:34Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 18:45:03.927659 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 18:45:03.935920 1 observer_polling.go:159] Starting file observer\\\\nI0227 18:45:04.003231 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 18:45:04.010284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0227 18:45:34.205850 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:33Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47da708cea4fc8dee9c3d6ac4bb7473cdc255bfed85666ddf72c1b49d93d94ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc26ccdc87598f2980e1dc35395d3846450da6de6c1818de589d95441798e232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f5edf9f42b84e2bf4ad4c6a88f9b2a14c248fb3fc1f821c9720d310d5d4a28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:37Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:37 crc kubenswrapper[4981]: I0227 18:47:37.613306 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:37Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:37 crc kubenswrapper[4981]: I0227 18:47:37.636475 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3bdbe5f2479348a5b45f36264c9672d2946ccfefc564712af8a4b27a7dc5335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:37Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:37 crc kubenswrapper[4981]: I0227 18:47:37.659120 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://788b841ec6031d54dc709b1c989be1ad082cb4388622a5ab0ff301265e834454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1953c5004b66db2ca93e039dae70e74e7cfb23c682cff15f8f092b7cdb1a2490\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:37Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:37 crc kubenswrapper[4981]: I0227 18:47:37.676301 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fefdc04-8285-4630-83d3-494dcc0216f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e16c0f52eabdce6a7e78719a5ec3d82a8ce95a2997db6c49b49f6a5dca3ab8be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cd
a8d27799da2d9733389fe569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rtcfx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5pm8g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:37Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:37 crc kubenswrapper[4981]: I0227 18:47:37.691233 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wcwdj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f91b4bf-a71e-44b8-95aa-fa8c0439c2e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78e1219b4a49e7b612e1b1ab3b6fe58f43c836ad2a06aa11813117ceef60e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twhd9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wcwdj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:37Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:37 crc kubenswrapper[4981]: I0227 18:47:37.710395 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"252b7f50-0e8b-4b8e-b165-79233ca02bf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65763c56cd317f0c30b92a96207c6f78c4c511f2575025b69c27bfe618bfb801\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b7c7d65b1617faae5147ede651ba5346a69d810d255d757035e1b045bbc600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5krs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:47:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8szb9\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:37Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:37 crc kubenswrapper[4981]: I0227 18:47:37.728358 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n2dzw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11688f5-7d6e-4931-88e5-31a5183eb6f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vg7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5vg7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n2dzw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:37Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:37 crc 
kubenswrapper[4981]: I0227 18:47:37.747352 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:37Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:37 crc kubenswrapper[4981]: I0227 18:47:37.764482 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fxkmm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64a9ab98-e01f-4125-8d91-49fb385b1e6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6487c3f83974f14c95b334e11b7a925cea985f20c99b1d78b48a212e43d87b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vcqj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fxkmm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:37Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:37 crc kubenswrapper[4981]: I0227 18:47:37.781009 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac
-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:37Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:37 crc kubenswrapper[4981]: I0227 18:47:37.799826 4981 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef8848e-8b7f-4a12-8436-6deee33371c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b78d032b71162dda8d49cc9cd3a6febb3da5326777dc926b9656919b41d320e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5e73486c7d159ec69ff6b39c2a2a93dfe4fa7cdf5b62bdfb91a6bb5d8b2e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6
b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094de35e827134fa5b9827035972708e7deb8d5ed21e43051209cd7da41af18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://273e5f0d5ff4cf6238bcbc9b79621756922942638298fb586b14ae71f1d95688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-
host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://273e5f0d5ff4cf6238bcbc9b79621756922942638298fb586b14ae71f1d95688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:37Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:37 crc kubenswrapper[4981]: I0227 18:47:37.827366 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://912de196edcc2cd9b4e8b52e0e65b3a51c2269ad31c6f21a09fafac4af4ad6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbfd
f237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:37Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:37 crc kubenswrapper[4981]: I0227 18:47:37.857116 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a12e1711e303ddf5cac918727a315715716c2b2e4239a36f592d45b89bad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a12e1711e303ddf5cac918727a315715716c2b2e4239a36f592d45b89bad0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T18:47:17Z\\\",\\\"message\\\":\\\"onfig-operator/kube-rbac-proxy-crio-crc\\\\nI0227 18:47:17.887365 7356 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-ktw87 in node crc\\\\nI0227 18:47:17.887652 7356 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-5pm8g\\\\nI0227 18:47:17.887278 7356 
obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0227 18:47:17.887663 7356 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-5pm8g\\\\nF0227 18:47:17.887664 7356 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:47:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6rlwn_openshift-ovn-kubernetes(0918866b-8c49-4332-bb4d-bea02b35f047)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654f
c7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:37Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:37 crc kubenswrapper[4981]: I0227 18:47:37.877208 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:37Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:37 crc kubenswrapper[4981]: I0227 18:47:37.895693 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:37Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:37 crc kubenswrapper[4981]: I0227 18:47:37.917580 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T18:47:36Z\\\",\\\"message\\\":\\\"2026-02-27T18:46:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_143f85ac-7a5e-4734-8105-6d733e8cd818\\\\n2026-02-27T18:46:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_143f85ac-7a5e-4734-8105-6d733e8cd818 to /host/opt/cni/bin/\\\\n2026-02-27T18:46:51Z [verbose] multus-daemon started\\\\n2026-02-27T18:46:51Z [verbose] Readiness Indicator file check\\\\n2026-02-27T18:47:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:37Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:38 crc kubenswrapper[4981]: I0227 18:47:38.519023 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-992xv_2f03f89e-d428-4246-a710-23c47810b60e/kube-multus/0.log" Feb 27 18:47:38 crc kubenswrapper[4981]: I0227 18:47:38.519144 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-992xv" event={"ID":"2f03f89e-d428-4246-a710-23c47810b60e","Type":"ContainerStarted","Data":"511d08431fde2a2ed342b5d8c934dc8915b5ac7d408321b7898626b3ea30becf"} Feb 27 18:47:38 crc kubenswrapper[4981]: I0227 18:47:38.536963 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c280ca1a-1502-41cd-9082-fd65ab37e46a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11c70643320ac0fa34e5a0320565811799190a0de16fa27c2bf1619083a4bcfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8acda9cd806448fe4f0faad7fb66a567eece7a93c841a09d68f1df5275090973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:38Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:38 crc kubenswrapper[4981]: I0227 18:47:38.555922 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef8848e-8b7f-4a12-8436-6deee33371c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b78d032b71162dda8d49cc9cd3a6febb3da5326777dc926b9656919b41d320e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5e73486c7d159ec69ff6b39c2a2a93dfe4fa7cdf5b62bdfb91a6bb5d8b2e47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8094de35e827134fa5b9827035972708e7deb8d5ed21e43051209cd7da41af18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://273e5f0d5ff4cf6238bcbc9b79621756922942638298fb586b14ae71f1d95688\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://273e5f0d5ff4cf6238bcbc9b79621756922942638298fb586b14ae71f1d95688\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:38Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:38 crc kubenswrapper[4981]: I0227 18:47:38.580954 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ktw87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec03bc00-a6c7-4dbe-9b9d-51bcd8187a6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://912de196edcc2cd9b4e8b52e0e65b3a51c2269ad31c6f21a09fafac4af4ad6c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7df8f222af18c486179d4837c24c8205ea8c4a94c276c0a36538fe8c45338349\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b8e8567416fffa57ca59184aaaa760a44865d898c572c3e0e1669ea1ec5a9c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a266b11b7e85bde0eac1f134455e16d924c63609cc9bcf687140f29b3a85a076\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edbfd
f237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edbfdf237f33ae3c44e0cbe6597e96bab1d80926b345d72fddc211458fe27805\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://445f0efa00293557616e2611f1695e208d003d3a5a420052bccb8953c5060c11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:56Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ded03e14174f163ac8422718c60a3f4a4cf214c44d2447f1f16c82df69c2c4a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g5nnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ktw87\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:38Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:38 crc kubenswrapper[4981]: I0227 18:47:38.611783 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0918866b-8c49-4332-bb4d-bea02b35f047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97a12e1711e303ddf5cac918727a315715716c2b2e4239a36f592d45b89bad0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97a12e1711e303ddf5cac918727a315715716c2b2e4239a36f592d45b89bad0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T18:47:17Z\\\",\\\"message\\\":\\\"onfig-operator/kube-rbac-proxy-crio-crc\\\\nI0227 18:47:17.887365 7356 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-ktw87 in node crc\\\\nI0227 18:47:17.887652 7356 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-5pm8g\\\\nI0227 18:47:17.887278 7356 
obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0227 18:47:17.887663 7356 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-5pm8g\\\\nF0227 18:47:17.887664 7356 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:47:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6rlwn_openshift-ovn-kubernetes(0918866b-8c49-4332-bb4d-bea02b35f047)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17c53909884817654f
c7c18917afe400bf00317fd5eda99b94d93f650156bb74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lm8jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6rlwn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:38Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:38 crc kubenswrapper[4981]: I0227 18:47:38.628672 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:38 crc kubenswrapper[4981]: I0227 18:47:38.628702 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:38 crc kubenswrapper[4981]: I0227 18:47:38.628752 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:38 crc kubenswrapper[4981]: E0227 18:47:38.629094 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2dzw" podUID="f11688f5-7d6e-4931-88e5-31a5183eb6f3" Feb 27 18:47:38 crc kubenswrapper[4981]: E0227 18:47:38.629301 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:47:38 crc kubenswrapper[4981]: I0227 18:47:38.629405 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:38 crc kubenswrapper[4981]: E0227 18:47:38.629462 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:47:38 crc kubenswrapper[4981]: E0227 18:47:38.629622 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:47:38 crc kubenswrapper[4981]: I0227 18:47:38.631323 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:38Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:38 crc kubenswrapper[4981]: I0227 18:47:38.650496 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7b4604701e3452a4da7b78896dfd6f8693ddf0bf7cc637568a7c08396bd51d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T18:47:38Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:38 crc kubenswrapper[4981]: I0227 18:47:38.670449 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-992xv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f03f89e-d428-4246-a710-23c47810b60e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://511d08431fde2a2ed342b5d8c934dc8915b5ac7d408321b7898626b3ea30becf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T18:47:36Z\\\",\\\"message\\\":\\\"2026-02-27T18:46:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_143f85ac-7a5e-4734-8105-6d733e8cd818\\\\n2026-02-27T18:46:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_143f85ac-7a5e-4734-8105-6d733e8cd818 to /host/opt/cni/bin/\\\\n2026-02-27T18:46:51Z [verbose] multus-daemon started\\\\n2026-02-27T18:46:51Z [verbose] Readiness Indicator file check\\\\n2026-02-27T18:47:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hhf52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-992xv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:38Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:38 crc kubenswrapper[4981]: I0227 18:47:38.702019 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"675c9b60-0b59-4266-9a4c-e9ee2eb6ae15\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddaeb00c3eda3a0b337de6bbdbe528d54c8792e44ca560ae466e6cc36a37d1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://470189b5421464ce73433d12c2ac05a38f554001cd604feda73a572da75ce9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1ae25e7597906b64896d4276cf73935f991a60f0b48c54e26dcdf635d0cd381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da0dc89dd86e28bee3edcc416d4b49b284350d6cd85b9edc333085b8894dd9b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02771b16c78560c129c5c0b452d1129b2d251f9024912a9670176e9864d443e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4b5d80a61b402d674e71bccbe7e4666a4dba483c2ffcc1db71215ec3ff455\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39306aeb1deed2c4dfbd3f29101614799ee07a047c5b20f06804f9810eff1866\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae6d6a80cebffc85e34c6391d7993a4cda9769e8ccd65395fbfa95aab212cc0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:38Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:38 crc kubenswrapper[4981]: I0227 18:47:38.723438 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f0410cc-d6fa-4d09-b129-480c0d96f91a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:47:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7b136fb1de92dc4033968909c08b413444fa2a097e3af36d13c919b46a64cbb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:46:17Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 18:46:16.311703 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 18:46:16.311910 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 18:46:16.312885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3905405879/tls.crt::/tmp/serving-cert-3905405879/tls.key\\\\\\\"\\\\nI0227 18:46:17.028492 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 18:46:17.033392 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 18:46:17.033504 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 18:46:17.033588 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 18:46:17.033650 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 18:46:17.044803 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 18:46:17.044929 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045003 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 18:46:17.045124 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 18:46:17.045208 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 18:46:17.045271 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 18:46:17.045332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 18:46:17.044851 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 18:46:17.047454 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:46:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:04Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T18:45:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:38Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:38 crc kubenswrapper[4981]: I0227 18:47:38.743357 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b9cd39-0166-4ad5-aae4-62595ce987ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T18:45:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e33ee2f6a8a6b33109ce0e0c8edea9d353fcadcb447dab73fc0763ca9f484a25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e33e23de51fc20ecaa8ed6d4c7f561de316c8b2adf3f2eb94b0f4284c0ae982a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T18:45:34Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 18:45:03.927659 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 18:45:03.935920 1 observer_polling.go:159] Starting file observer\\\\nI0227 18:45:04.003231 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 18:45:04.010284 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0227 18:45:34.205850 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:45:33Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47da708cea4fc8dee9c3d6ac4bb7473cdc255bfed85666ddf72c1b49d93d94ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc26ccdc87598f2980e1dc35395d3846450da6de6c1818de589d95441798e232\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64f5edf9f42b84e2bf4ad4c6a88f9b2a14c248fb3fc1f821c9720d310d5d4a28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T18:45:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T18:45:01Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:38Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:38 crc kubenswrapper[4981]: I0227 18:47:38.764817 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T18:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T18:47:38Z is after 2025-08-24T17:21:41Z" Feb 27 18:47:38 crc kubenswrapper[4981]: I0227 18:47:38.858544 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podStartSLOduration=95.858511454 podStartE2EDuration="1m35.858511454s" podCreationTimestamp="2026-02-27 18:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:38.843882057 +0000 UTC m=+158.322663257" watchObservedRunningTime="2026-02-27 18:47:38.858511454 +0000 UTC m=+158.337292654" Feb 27 18:47:38 crc kubenswrapper[4981]: I0227 18:47:38.875657 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-wcwdj" podStartSLOduration=95.875628293 podStartE2EDuration="1m35.875628293s" podCreationTimestamp="2026-02-27 18:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:38.857956777 +0000 
UTC m=+158.336737967" watchObservedRunningTime="2026-02-27 18:47:38.875628293 +0000 UTC m=+158.354409493" Feb 27 18:47:38 crc kubenswrapper[4981]: I0227 18:47:38.876204 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8szb9" podStartSLOduration=94.876189082 podStartE2EDuration="1m34.876189082s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:38.875748377 +0000 UTC m=+158.354529577" watchObservedRunningTime="2026-02-27 18:47:38.876189082 +0000 UTC m=+158.354970282" Feb 27 18:47:40 crc kubenswrapper[4981]: I0227 18:47:40.628156 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:40 crc kubenswrapper[4981]: I0227 18:47:40.628229 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:40 crc kubenswrapper[4981]: I0227 18:47:40.628178 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:40 crc kubenswrapper[4981]: E0227 18:47:40.628357 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:47:40 crc kubenswrapper[4981]: I0227 18:47:40.628411 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:40 crc kubenswrapper[4981]: E0227 18:47:40.628553 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:47:40 crc kubenswrapper[4981]: E0227 18:47:40.628767 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:47:40 crc kubenswrapper[4981]: E0227 18:47:40.628866 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2dzw" podUID="f11688f5-7d6e-4931-88e5-31a5183eb6f3" Feb 27 18:47:41 crc kubenswrapper[4981]: I0227 18:47:41.644342 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-fxkmm" podStartSLOduration=98.644313359 podStartE2EDuration="1m38.644313359s" podCreationTimestamp="2026-02-27 18:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:38.92819577 +0000 UTC m=+158.406976960" watchObservedRunningTime="2026-02-27 18:47:41.644313359 +0000 UTC m=+161.123094549" Feb 27 18:47:41 crc kubenswrapper[4981]: I0227 18:47:41.644982 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=68.644976641 podStartE2EDuration="1m8.644976641s" podCreationTimestamp="2026-02-27 18:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:41.643960187 +0000 UTC m=+161.122741387" watchObservedRunningTime="2026-02-27 18:47:41.644976641 +0000 UTC m=+161.123757801" Feb 27 18:47:41 crc kubenswrapper[4981]: I0227 18:47:41.661048 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=37.661032025 podStartE2EDuration="37.661032025s" podCreationTimestamp="2026-02-27 18:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:41.660826928 +0000 UTC m=+161.139608128" watchObservedRunningTime="2026-02-27 18:47:41.661032025 +0000 UTC m=+161.139813195" Feb 27 18:47:41 crc kubenswrapper[4981]: E0227 18:47:41.725961 4981 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 18:47:41 crc kubenswrapper[4981]: I0227 18:47:41.728549 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-ktw87" podStartSLOduration=97.728517889 podStartE2EDuration="1m37.728517889s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:41.689245096 +0000 UTC m=+161.168026296" watchObservedRunningTime="2026-02-27 18:47:41.728517889 +0000 UTC m=+161.207299089" Feb 27 18:47:41 crc kubenswrapper[4981]: I0227 18:47:41.778500 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-992xv" podStartSLOduration=98.77847903 podStartE2EDuration="1m38.77847903s" podCreationTimestamp="2026-02-27 18:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:41.778381027 +0000 UTC m=+161.257162217" watchObservedRunningTime="2026-02-27 18:47:41.77847903 +0000 UTC m=+161.257260210" Feb 27 18:47:41 crc kubenswrapper[4981]: I0227 18:47:41.810576 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=57.810559038 podStartE2EDuration="57.810559038s" podCreationTimestamp="2026-02-27 18:46:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:41.810219537 +0000 UTC m=+161.289000697" watchObservedRunningTime="2026-02-27 18:47:41.810559038 +0000 UTC m=+161.289340208" Feb 27 18:47:41 crc kubenswrapper[4981]: I0227 18:47:41.829021 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=70.82900458 podStartE2EDuration="1m10.82900458s" podCreationTimestamp="2026-02-27 18:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:41.828820924 +0000 UTC m=+161.307602084" watchObservedRunningTime="2026-02-27 18:47:41.82900458 +0000 UTC m=+161.307785750" Feb 27 18:47:41 crc kubenswrapper[4981]: I0227 18:47:41.851002 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=23.850984308 podStartE2EDuration="23.850984308s" podCreationTimestamp="2026-02-27 18:47:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:41.850352767 +0000 UTC m=+161.329133937" watchObservedRunningTime="2026-02-27 18:47:41.850984308 +0000 UTC m=+161.329765468" Feb 27 18:47:41 crc kubenswrapper[4981]: I0227 18:47:41.858355 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 18:47:41 crc kubenswrapper[4981]: I0227 18:47:41.858610 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 18:47:41 crc kubenswrapper[4981]: I0227 18:47:41.858700 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 18:47:41 crc kubenswrapper[4981]: I0227 18:47:41.858789 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 18:47:41 crc kubenswrapper[4981]: I0227 18:47:41.858869 4981 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T18:47:41Z","lastTransitionTime":"2026-02-27T18:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 18:47:41 crc kubenswrapper[4981]: I0227 18:47:41.909735 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-q2gl2"] Feb 27 18:47:41 crc kubenswrapper[4981]: I0227 18:47:41.911080 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q2gl2" Feb 27 18:47:41 crc kubenswrapper[4981]: I0227 18:47:41.919294 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 27 18:47:41 crc kubenswrapper[4981]: I0227 18:47:41.919624 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 27 18:47:41 crc kubenswrapper[4981]: I0227 18:47:41.920205 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 27 18:47:41 crc kubenswrapper[4981]: I0227 18:47:41.920827 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 27 18:47:41 crc kubenswrapper[4981]: I0227 18:47:41.995539 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f05f616-2dae-41c1-a4bf-01db027de30f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-q2gl2\" (UID: \"8f05f616-2dae-41c1-a4bf-01db027de30f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q2gl2" Feb 27 18:47:41 crc 
kubenswrapper[4981]: I0227 18:47:41.995595 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8f05f616-2dae-41c1-a4bf-01db027de30f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-q2gl2\" (UID: \"8f05f616-2dae-41c1-a4bf-01db027de30f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q2gl2" Feb 27 18:47:41 crc kubenswrapper[4981]: I0227 18:47:41.995634 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8f05f616-2dae-41c1-a4bf-01db027de30f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-q2gl2\" (UID: \"8f05f616-2dae-41c1-a4bf-01db027de30f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q2gl2" Feb 27 18:47:41 crc kubenswrapper[4981]: I0227 18:47:41.995657 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f05f616-2dae-41c1-a4bf-01db027de30f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-q2gl2\" (UID: \"8f05f616-2dae-41c1-a4bf-01db027de30f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q2gl2" Feb 27 18:47:41 crc kubenswrapper[4981]: I0227 18:47:41.995784 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f05f616-2dae-41c1-a4bf-01db027de30f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-q2gl2\" (UID: \"8f05f616-2dae-41c1-a4bf-01db027de30f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q2gl2" Feb 27 18:47:42 crc kubenswrapper[4981]: I0227 18:47:42.097145 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/8f05f616-2dae-41c1-a4bf-01db027de30f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-q2gl2\" (UID: \"8f05f616-2dae-41c1-a4bf-01db027de30f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q2gl2" Feb 27 18:47:42 crc kubenswrapper[4981]: I0227 18:47:42.097237 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f05f616-2dae-41c1-a4bf-01db027de30f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-q2gl2\" (UID: \"8f05f616-2dae-41c1-a4bf-01db027de30f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q2gl2" Feb 27 18:47:42 crc kubenswrapper[4981]: I0227 18:47:42.097288 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8f05f616-2dae-41c1-a4bf-01db027de30f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-q2gl2\" (UID: \"8f05f616-2dae-41c1-a4bf-01db027de30f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q2gl2" Feb 27 18:47:42 crc kubenswrapper[4981]: I0227 18:47:42.097350 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8f05f616-2dae-41c1-a4bf-01db027de30f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-q2gl2\" (UID: \"8f05f616-2dae-41c1-a4bf-01db027de30f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q2gl2" Feb 27 18:47:42 crc kubenswrapper[4981]: I0227 18:47:42.097397 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f05f616-2dae-41c1-a4bf-01db027de30f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-q2gl2\" (UID: \"8f05f616-2dae-41c1-a4bf-01db027de30f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q2gl2" Feb 27 18:47:42 crc 
kubenswrapper[4981]: I0227 18:47:42.097543 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8f05f616-2dae-41c1-a4bf-01db027de30f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-q2gl2\" (UID: \"8f05f616-2dae-41c1-a4bf-01db027de30f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q2gl2" Feb 27 18:47:42 crc kubenswrapper[4981]: I0227 18:47:42.097627 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8f05f616-2dae-41c1-a4bf-01db027de30f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-q2gl2\" (UID: \"8f05f616-2dae-41c1-a4bf-01db027de30f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q2gl2" Feb 27 18:47:42 crc kubenswrapper[4981]: I0227 18:47:42.098747 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f05f616-2dae-41c1-a4bf-01db027de30f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-q2gl2\" (UID: \"8f05f616-2dae-41c1-a4bf-01db027de30f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q2gl2" Feb 27 18:47:42 crc kubenswrapper[4981]: I0227 18:47:42.106722 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f05f616-2dae-41c1-a4bf-01db027de30f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-q2gl2\" (UID: \"8f05f616-2dae-41c1-a4bf-01db027de30f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q2gl2" Feb 27 18:47:42 crc kubenswrapper[4981]: I0227 18:47:42.126814 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f05f616-2dae-41c1-a4bf-01db027de30f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-q2gl2\" (UID: 
\"8f05f616-2dae-41c1-a4bf-01db027de30f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q2gl2" Feb 27 18:47:42 crc kubenswrapper[4981]: I0227 18:47:42.230273 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q2gl2" Feb 27 18:47:42 crc kubenswrapper[4981]: W0227 18:47:42.249951 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f05f616_2dae_41c1_a4bf_01db027de30f.slice/crio-dfc4c0b4704169e761ef1b475f2db0971cf2ab649b1c91d27cf2909dc8795c13 WatchSource:0}: Error finding container dfc4c0b4704169e761ef1b475f2db0971cf2ab649b1c91d27cf2909dc8795c13: Status 404 returned error can't find the container with id dfc4c0b4704169e761ef1b475f2db0971cf2ab649b1c91d27cf2909dc8795c13 Feb 27 18:47:42 crc kubenswrapper[4981]: I0227 18:47:42.535671 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q2gl2" event={"ID":"8f05f616-2dae-41c1-a4bf-01db027de30f","Type":"ContainerStarted","Data":"ca441b7d6c592c042184156d399b1f586556b9beb2084e2656721b7c739d67c1"} Feb 27 18:47:42 crc kubenswrapper[4981]: I0227 18:47:42.535748 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q2gl2" event={"ID":"8f05f616-2dae-41c1-a4bf-01db027de30f","Type":"ContainerStarted","Data":"dfc4c0b4704169e761ef1b475f2db0971cf2ab649b1c91d27cf2909dc8795c13"} Feb 27 18:47:42 crc kubenswrapper[4981]: I0227 18:47:42.555726 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-q2gl2" podStartSLOduration=99.555705899 podStartE2EDuration="1m39.555705899s" podCreationTimestamp="2026-02-27 18:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-27 18:47:42.553872339 +0000 UTC m=+162.032653539" watchObservedRunningTime="2026-02-27 18:47:42.555705899 +0000 UTC m=+162.034487099" Feb 27 18:47:42 crc kubenswrapper[4981]: I0227 18:47:42.627540 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:42 crc kubenswrapper[4981]: I0227 18:47:42.627565 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:42 crc kubenswrapper[4981]: E0227 18:47:42.627679 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:47:42 crc kubenswrapper[4981]: I0227 18:47:42.627723 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:42 crc kubenswrapper[4981]: I0227 18:47:42.627733 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:42 crc kubenswrapper[4981]: E0227 18:47:42.627864 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2dzw" podUID="f11688f5-7d6e-4931-88e5-31a5183eb6f3" Feb 27 18:47:42 crc kubenswrapper[4981]: E0227 18:47:42.627990 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:47:42 crc kubenswrapper[4981]: E0227 18:47:42.628238 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:47:42 crc kubenswrapper[4981]: I0227 18:47:42.649128 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 27 18:47:42 crc kubenswrapper[4981]: I0227 18:47:42.658921 4981 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 27 18:47:44 crc kubenswrapper[4981]: I0227 18:47:44.628172 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:44 crc kubenswrapper[4981]: I0227 18:47:44.628238 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:44 crc kubenswrapper[4981]: I0227 18:47:44.628242 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:44 crc kubenswrapper[4981]: I0227 18:47:44.628196 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:44 crc kubenswrapper[4981]: E0227 18:47:44.628433 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2dzw" podUID="f11688f5-7d6e-4931-88e5-31a5183eb6f3" Feb 27 18:47:44 crc kubenswrapper[4981]: E0227 18:47:44.628726 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 18:47:44 crc kubenswrapper[4981]: E0227 18:47:44.628962 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 18:47:44 crc kubenswrapper[4981]: E0227 18:47:44.629103 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 18:47:45 crc kubenswrapper[4981]: I0227 18:47:45.629609 4981 scope.go:117] "RemoveContainer" containerID="f97a12e1711e303ddf5cac918727a315715716c2b2e4239a36f592d45b89bad0" Feb 27 18:47:46 crc kubenswrapper[4981]: I0227 18:47:46.551367 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlwn_0918866b-8c49-4332-bb4d-bea02b35f047/ovnkube-controller/2.log" Feb 27 18:47:46 crc kubenswrapper[4981]: I0227 18:47:46.553653 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-n2dzw"] Feb 27 18:47:46 crc kubenswrapper[4981]: I0227 18:47:46.553771 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:46 crc kubenswrapper[4981]: E0227 18:47:46.553863 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n2dzw" podUID="f11688f5-7d6e-4931-88e5-31a5183eb6f3"
Feb 27 18:47:46 crc kubenswrapper[4981]: I0227 18:47:46.554922 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" event={"ID":"0918866b-8c49-4332-bb4d-bea02b35f047","Type":"ContainerStarted","Data":"3b6dbbcd19a01f9599d64a66cadf7ad97dd7ed2ae063c13bff3b1d20efb9faae"}
Feb 27 18:47:46 crc kubenswrapper[4981]: I0227 18:47:46.555790 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn"
Feb 27 18:47:46 crc kubenswrapper[4981]: I0227 18:47:46.589971 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" podStartSLOduration=102.589953605 podStartE2EDuration="1m42.589953605s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:46.589599044 +0000 UTC m=+166.068380204" watchObservedRunningTime="2026-02-27 18:47:46.589953605 +0000 UTC m=+166.068734775"
Feb 27 18:47:46 crc kubenswrapper[4981]: I0227 18:47:46.628509 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 18:47:46 crc kubenswrapper[4981]: I0227 18:47:46.628601 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 18:47:46 crc kubenswrapper[4981]: E0227 18:47:46.628647 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 18:47:46 crc kubenswrapper[4981]: E0227 18:47:46.628718 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 18:47:46 crc kubenswrapper[4981]: I0227 18:47:46.628593 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 18:47:46 crc kubenswrapper[4981]: E0227 18:47:46.628998 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 18:47:46 crc kubenswrapper[4981]: E0227 18:47:46.727164 4981 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 27 18:47:48 crc kubenswrapper[4981]: I0227 18:47:48.627976 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 18:47:48 crc kubenswrapper[4981]: I0227 18:47:48.628175 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw"
Feb 27 18:47:48 crc kubenswrapper[4981]: E0227 18:47:48.628212 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 18:47:48 crc kubenswrapper[4981]: I0227 18:47:48.628277 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 18:47:48 crc kubenswrapper[4981]: I0227 18:47:48.628287 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 18:47:48 crc kubenswrapper[4981]: E0227 18:47:48.628476 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2dzw" podUID="f11688f5-7d6e-4931-88e5-31a5183eb6f3"
Feb 27 18:47:48 crc kubenswrapper[4981]: E0227 18:47:48.628673 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 18:47:48 crc kubenswrapper[4981]: E0227 18:47:48.628866 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 18:47:50 crc kubenswrapper[4981]: I0227 18:47:50.627934 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 18:47:50 crc kubenswrapper[4981]: I0227 18:47:50.628003 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw"
Feb 27 18:47:50 crc kubenswrapper[4981]: I0227 18:47:50.628048 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 18:47:50 crc kubenswrapper[4981]: E0227 18:47:50.628157 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 18:47:50 crc kubenswrapper[4981]: I0227 18:47:50.628236 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 18:47:50 crc kubenswrapper[4981]: E0227 18:47:50.628401 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 18:47:50 crc kubenswrapper[4981]: E0227 18:47:50.628523 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 18:47:50 crc kubenswrapper[4981]: E0227 18:47:50.628633 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n2dzw" podUID="f11688f5-7d6e-4931-88e5-31a5183eb6f3"
Feb 27 18:47:50 crc kubenswrapper[4981]: I0227 18:47:50.684506 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.144332 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.206103 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rn6m5"]
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.206700 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.208122 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jdhvt"]
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.208893 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jdhvt"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.209758 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v4vk8"]
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.210306 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.211785 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h885x"]
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.212393 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h885x"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.214469 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-xmtml"]
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.215126 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xmtml"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.219347 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.219644 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.220681 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.221933 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9h9p4"]
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.222618 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9h9p4"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.222820 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.228889 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv"]
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.232595 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-h8fzp"]
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.233716 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg"]
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.234193 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.234951 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7mfkz"]
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.236546 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.237171 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.237217 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.237288 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.237546 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.237583 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.237874 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.238686 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h8fzp"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.239017 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.254117 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7hs2"]
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.254534 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sfsnw"]
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.254658 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.254809 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.254872 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.254898 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sfsnw"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.254928 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-dllzn"]
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.255174 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.255192 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.255246 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.255269 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-7mfkz"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.255301 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.255407 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.255452 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.255412 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.255483 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.255519 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7hs2"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.255554 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dllzn"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.257145 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-t9grl"]
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.257513 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-t9grl"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.259256 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.259294 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.259306 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.259466 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.259501 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.259557 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.259571 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.259623 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.259659 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.259572 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.259732 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.259733 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.259777 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.259782 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.259866 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.259874 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.260163 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.260266 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.260356 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.260484 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.260625 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.260642 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.260775 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.260810 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.261365 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hx8gq"]
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.261842 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hx8gq"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.262896 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.266039 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.266047 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.270244 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.271661 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.272040 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.272195 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.272254 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.272488 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.273077 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-922ln"]
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.273680 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-922ln"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.274356 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kmvhm"]
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.275393 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.277765 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.277764 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.277958 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-c4m77"]
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.278577 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5f96t"]
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.278992 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5f96t"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.279208 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-c4m77"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.279960 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.280203 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.283973 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.284473 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.296743 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.300467 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.300797 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g8dqb"]
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.302687 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5cs45"]
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.303293 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.304018 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.305152 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-cpb6p"]
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.306044 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5cs45"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.306882 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-g8dqb"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.311275 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.323187 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2vmct"]
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.323463 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-r858c"]
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.323730 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pcmgp"]
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.324080 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pcmgp"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.324144 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cpb6p"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.324184 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2vmct"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.324237 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-r858c"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.324669 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.326708 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/46dc2075-e24c-46cf-9885-3de46322461d-etcd-client\") pod \"apiserver-7bbb656c7d-ckjzg\" (UID: \"46dc2075-e24c-46cf-9885-3de46322461d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.326750 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1107c99b-98a7-4103-9e6c-dde234daacaf-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jdhvt\" (UID: \"1107c99b-98a7-4103-9e6c-dde234daacaf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jdhvt"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.326778 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-serving-cert\") pod \"controller-manager-879f6c89f-rn6m5\" (UID: \"89fbbf29-4a7d-40d3-ad12-9e1111396e8d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.326807 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.326833 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bbbe9d1b-2a55-4b34-b452-32f51eef3278-available-featuregates\") pod \"openshift-config-operator-7777fb866f-h8fzp\" (UID: \"bbbe9d1b-2a55-4b34-b452-32f51eef3278\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-h8fzp"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.326858 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1107c99b-98a7-4103-9e6c-dde234daacaf-config\") pod \"machine-api-operator-5694c8668f-jdhvt\" (UID: \"1107c99b-98a7-4103-9e6c-dde234daacaf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jdhvt"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.326883 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bedbc35-5c52-4c25-a77b-dcbc4a5dbc21-config\") pod \"openshift-apiserver-operator-796bbdcf4f-h885x\" (UID: \"3bedbc35-5c52-4c25-a77b-dcbc4a5dbc21\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h885x"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.326908 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c616c83f-0616-4a1d-b2ac-69cdc88eef70-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9h9p4\" (UID: \"c616c83f-0616-4a1d-b2ac-69cdc88eef70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9h9p4"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.326933 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/46dc2075-e24c-46cf-9885-3de46322461d-audit-policies\") pod \"apiserver-7bbb656c7d-ckjzg\" (UID: \"46dc2075-e24c-46cf-9885-3de46322461d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.326956 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46dc2075-e24c-46cf-9885-3de46322461d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ckjzg\" (UID: \"46dc2075-e24c-46cf-9885-3de46322461d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.326978 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzdvc\" (UniqueName: \"kubernetes.io/projected/46dc2075-e24c-46cf-9885-3de46322461d-kube-api-access-hzdvc\") pod \"apiserver-7bbb656c7d-ckjzg\" (UID: \"46dc2075-e24c-46cf-9885-3de46322461d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327002 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327039 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/46dc2075-e24c-46cf-9885-3de46322461d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ckjzg\" (UID: \"46dc2075-e24c-46cf-9885-3de46322461d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327110 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rn6m5\" (UID: \"89fbbf29-4a7d-40d3-ad12-9e1111396e8d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327136 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327160 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8071e59a-5e70-4dac-b36c-e17dd74f75b0-auth-proxy-config\") pod \"machine-approver-56656f9798-xmtml\" (UID: \"8071e59a-5e70-4dac-b36c-e17dd74f75b0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xmtml"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327188 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-config\") pod \"controller-manager-879f6c89f-rn6m5\" (UID: \"89fbbf29-4a7d-40d3-ad12-9e1111396e8d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327212 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/86f8ab04-83b9-497b-a8b4-cde27e61d568-audit-policies\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327234 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bedbc35-5c52-4c25-a77b-dcbc4a5dbc21-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-h885x\" (UID: \"3bedbc35-5c52-4c25-a77b-dcbc4a5dbc21\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h885x"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327258 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrlns\" (UniqueName: \"kubernetes.io/projected/1107c99b-98a7-4103-9e6c-dde234daacaf-kube-api-access-mrlns\") pod \"machine-api-operator-5694c8668f-jdhvt\" (UID: \"1107c99b-98a7-4103-9e6c-dde234daacaf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jdhvt"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327281 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhw69\" (UniqueName: \"kubernetes.io/projected/bbbe9d1b-2a55-4b34-b452-32f51eef3278-kube-api-access-mhw69\") pod \"openshift-config-operator-7777fb866f-h8fzp\" (UID: \"bbbe9d1b-2a55-4b34-b452-32f51eef3278\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-h8fzp"
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327305 4981
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqr8k\" (UniqueName: \"kubernetes.io/projected/c616c83f-0616-4a1d-b2ac-69cdc88eef70-kube-api-access-wqr8k\") pod \"authentication-operator-69f744f599-9h9p4\" (UID: \"c616c83f-0616-4a1d-b2ac-69cdc88eef70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9h9p4" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327332 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf5mb\" (UniqueName: \"kubernetes.io/projected/86f8ab04-83b9-497b-a8b4-cde27e61d568-kube-api-access-nf5mb\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327355 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1107c99b-98a7-4103-9e6c-dde234daacaf-images\") pod \"machine-api-operator-5694c8668f-jdhvt\" (UID: \"1107c99b-98a7-4103-9e6c-dde234daacaf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jdhvt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327381 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327408 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/46dc2075-e24c-46cf-9885-3de46322461d-audit-dir\") pod 
\"apiserver-7bbb656c7d-ckjzg\" (UID: \"46dc2075-e24c-46cf-9885-3de46322461d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327429 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10e31a3f-eb88-4c8b-93e7-e251f762d29e-client-ca\") pod \"route-controller-manager-6576b87f9c-ns4jv\" (UID: \"10e31a3f-eb88-4c8b-93e7-e251f762d29e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327450 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327487 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327510 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8071e59a-5e70-4dac-b36c-e17dd74f75b0-config\") pod \"machine-approver-56656f9798-xmtml\" (UID: \"8071e59a-5e70-4dac-b36c-e17dd74f75b0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xmtml" Feb 27 18:47:52 crc kubenswrapper[4981]: 
I0227 18:47:52.327530 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c616c83f-0616-4a1d-b2ac-69cdc88eef70-serving-cert\") pod \"authentication-operator-69f744f599-9h9p4\" (UID: \"c616c83f-0616-4a1d-b2ac-69cdc88eef70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9h9p4" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327551 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/46dc2075-e24c-46cf-9885-3de46322461d-encryption-config\") pod \"apiserver-7bbb656c7d-ckjzg\" (UID: \"46dc2075-e24c-46cf-9885-3de46322461d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327573 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/86f8ab04-83b9-497b-a8b4-cde27e61d568-audit-dir\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327594 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58j65\" (UniqueName: \"kubernetes.io/projected/3bedbc35-5c52-4c25-a77b-dcbc4a5dbc21-kube-api-access-58j65\") pod \"openshift-apiserver-operator-796bbdcf4f-h885x\" (UID: \"3bedbc35-5c52-4c25-a77b-dcbc4a5dbc21\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h885x" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327636 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327659 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8071e59a-5e70-4dac-b36c-e17dd74f75b0-machine-approver-tls\") pod \"machine-approver-56656f9798-xmtml\" (UID: \"8071e59a-5e70-4dac-b36c-e17dd74f75b0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xmtml" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327681 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10e31a3f-eb88-4c8b-93e7-e251f762d29e-config\") pod \"route-controller-manager-6576b87f9c-ns4jv\" (UID: \"10e31a3f-eb88-4c8b-93e7-e251f762d29e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327703 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v87jv\" (UniqueName: \"kubernetes.io/projected/8071e59a-5e70-4dac-b36c-e17dd74f75b0-kube-api-access-v87jv\") pod \"machine-approver-56656f9798-xmtml\" (UID: \"8071e59a-5e70-4dac-b36c-e17dd74f75b0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xmtml" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327723 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbbe9d1b-2a55-4b34-b452-32f51eef3278-serving-cert\") pod \"openshift-config-operator-7777fb866f-h8fzp\" (UID: \"bbbe9d1b-2a55-4b34-b452-32f51eef3278\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-h8fzp" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327746 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46dc2075-e24c-46cf-9885-3de46322461d-serving-cert\") pod \"apiserver-7bbb656c7d-ckjzg\" (UID: \"46dc2075-e24c-46cf-9885-3de46322461d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327768 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdvf5\" (UniqueName: \"kubernetes.io/projected/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-kube-api-access-rdvf5\") pod \"controller-manager-879f6c89f-rn6m5\" (UID: \"89fbbf29-4a7d-40d3-ad12-9e1111396e8d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327792 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327815 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327839 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10e31a3f-eb88-4c8b-93e7-e251f762d29e-serving-cert\") pod \"route-controller-manager-6576b87f9c-ns4jv\" (UID: \"10e31a3f-eb88-4c8b-93e7-e251f762d29e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327861 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c616c83f-0616-4a1d-b2ac-69cdc88eef70-service-ca-bundle\") pod \"authentication-operator-69f744f599-9h9p4\" (UID: \"c616c83f-0616-4a1d-b2ac-69cdc88eef70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9h9p4" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327896 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgbg7\" (UniqueName: \"kubernetes.io/projected/10e31a3f-eb88-4c8b-93e7-e251f762d29e-kube-api-access-lgbg7\") pod \"route-controller-manager-6576b87f9c-ns4jv\" (UID: \"10e31a3f-eb88-4c8b-93e7-e251f762d29e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327919 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c616c83f-0616-4a1d-b2ac-69cdc88eef70-config\") pod \"authentication-operator-69f744f599-9h9p4\" (UID: \"c616c83f-0616-4a1d-b2ac-69cdc88eef70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9h9p4" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327946 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-user-template-login\") pod 
\"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.327975 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.328001 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-client-ca\") pod \"controller-manager-879f6c89f-rn6m5\" (UID: \"89fbbf29-4a7d-40d3-ad12-9e1111396e8d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.328473 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.328609 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.328685 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.328764 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.328854 4981 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.328894 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.328938 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.329023 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.329061 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.329121 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.329142 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.329210 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.329358 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.329879 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.330220 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.330242 
4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.330308 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.330415 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.330518 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.330625 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.330924 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.330960 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.331030 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.331128 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.331636 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-swwkp"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.332204 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-swwkp" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.332346 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qb2m5"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.332517 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.332654 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.332763 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.332896 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.332981 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.333002 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qb2m5" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.333165 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.334041 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.336252 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bkfrt"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.336341 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.336781 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-l8kzl"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.336864 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bkfrt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.337359 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l8kzl" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.339662 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.341000 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.341228 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2q7p"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.341972 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2q7p" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.342720 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.343785 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jdhvt"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.343825 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xnkzj"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.344757 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-xnkzj" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.344961 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85vdw"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.345775 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85vdw" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.346538 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.346660 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.346677 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-v46wf"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.347636 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v46wf" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.347796 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536965-kh5p2"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.352589 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-plxr2"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.352774 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-kh5p2" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.362266 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqktd"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.363152 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-plxr2" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.363170 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rn6m5"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.363246 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqktd" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.363268 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5pgcp"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.364185 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-v5c96"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.366090 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-5pgcp" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.366812 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-gv2d7"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.366894 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v5c96" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.368963 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9h9p4"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.368991 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h885x"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.369109 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gv2d7" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.370396 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.373767 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-s7wqk"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.376311 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-s7wqk" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.381259 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-dllzn"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.385117 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qb2m5"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.385949 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-h8fzp"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.388780 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-c4m77"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.389122 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.390628 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-922ln"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.392889 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2q7p"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 
18:47:52.395774 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7hs2"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.399582 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kmvhm"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.400971 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.403503 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-cpb6p"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.405228 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v4vk8"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.408707 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.408796 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5cs45"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.411010 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.412331 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hx8gq"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.414582 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7r9r7"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.415618 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.417637 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gv2d7"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.419319 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bkfrt"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.420869 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xnkzj"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.423108 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-n7dd8"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.423600 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-n7dd8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.424853 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-l8kzl"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.426391 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pcmgp"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.428488 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g8dqb"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.428709 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.429033 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/b7d035ce-f026-4668-9fca-c344f1fe60e3-secret-volume\") pod \"collect-profiles-29536965-kh5p2\" (UID: \"b7d035ce-f026-4668-9fca-c344f1fe60e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-kh5p2" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.429080 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41343732-7cc4-4fe6-9435-8a6332ba522c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5f96t\" (UID: \"41343732-7cc4-4fe6-9435-8a6332ba522c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5f96t" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.429110 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2eb44459-26b2-48d3-931c-e718dda5133b-proxy-tls\") pod \"machine-config-operator-74547568cd-v46wf\" (UID: \"2eb44459-26b2-48d3-931c-e718dda5133b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v46wf" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.429298 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/981ad6df-4f80-446c-83a8-cf8e4bc7436d-config\") pod \"console-operator-58897d9998-7mfkz\" (UID: \"981ad6df-4f80-446c-83a8-cf8e4bc7436d\") " pod="openshift-console-operator/console-operator-58897d9998-7mfkz" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.429337 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41343732-7cc4-4fe6-9435-8a6332ba522c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5f96t\" (UID: \"41343732-7cc4-4fe6-9435-8a6332ba522c\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5f96t" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.429356 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th628\" (UniqueName: \"kubernetes.io/projected/b7d035ce-f026-4668-9fca-c344f1fe60e3-kube-api-access-th628\") pod \"collect-profiles-29536965-kh5p2\" (UID: \"b7d035ce-f026-4668-9fca-c344f1fe60e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-kh5p2" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.429375 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/deda0ab1-f81e-4898-b4cb-5627947b5ed4-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.429392 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2eb44459-26b2-48d3-931c-e718dda5133b-images\") pod \"machine-config-operator-74547568cd-v46wf\" (UID: \"2eb44459-26b2-48d3-931c-e718dda5133b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v46wf" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.429457 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h475c\" (UniqueName: \"kubernetes.io/projected/2eb44459-26b2-48d3-931c-e718dda5133b-kube-api-access-h475c\") pod \"machine-config-operator-74547568cd-v46wf\" (UID: \"2eb44459-26b2-48d3-931c-e718dda5133b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v46wf" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.429532 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-swwkp"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.429588 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.429755 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-oauth-serving-cert\") pod \"console-f9d7485db-dllzn\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") " pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.429826 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41343732-7cc4-4fe6-9435-8a6332ba522c-config\") pod \"kube-apiserver-operator-766d6c64bb-5f96t\" (UID: \"41343732-7cc4-4fe6-9435-8a6332ba522c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5f96t" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.429908 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/46dc2075-e24c-46cf-9885-3de46322461d-etcd-client\") pod \"apiserver-7bbb656c7d-ckjzg\" (UID: \"46dc2075-e24c-46cf-9885-3de46322461d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.429983 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/05d54587-469b-407f-aa60-f0fdefb9dce7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-cpb6p\" (UID: \"05d54587-469b-407f-aa60-f0fdefb9dce7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cpb6p" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.430068 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/deda0ab1-f81e-4898-b4cb-5627947b5ed4-image-import-ca\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.430144 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bbbe9d1b-2a55-4b34-b452-32f51eef3278-available-featuregates\") pod \"openshift-config-operator-7777fb866f-h8fzp\" (UID: \"bbbe9d1b-2a55-4b34-b452-32f51eef3278\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-h8fzp" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.430297 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37a55d62-b540-4883-9548-3c0da02e8824-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2vmct\" (UID: \"37a55d62-b540-4883-9548-3c0da02e8824\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2vmct" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.430469 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bbbe9d1b-2a55-4b34-b452-32f51eef3278-available-featuregates\") pod \"openshift-config-operator-7777fb866f-h8fzp\" (UID: \"bbbe9d1b-2a55-4b34-b452-32f51eef3278\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-h8fzp" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.430599 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mj9r\" (UniqueName: \"kubernetes.io/projected/3667fc6c-078a-4be4-95ff-7174c74faf2c-kube-api-access-8mj9r\") pod \"dns-operator-744455d44c-c4m77\" (UID: \"3667fc6c-078a-4be4-95ff-7174c74faf2c\") " pod="openshift-dns-operator/dns-operator-744455d44c-c4m77" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.430704 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/23834dc7-7dcc-4a63-b666-3c1829da4cf4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-l8kzl\" (UID: \"23834dc7-7dcc-4a63-b666-3c1829da4cf4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l8kzl" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.430798 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46dc2075-e24c-46cf-9885-3de46322461d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ckjzg\" (UID: \"46dc2075-e24c-46cf-9885-3de46322461d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.430899 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bedbc35-5c52-4c25-a77b-dcbc4a5dbc21-config\") pod \"openshift-apiserver-operator-796bbdcf4f-h885x\" (UID: \"3bedbc35-5c52-4c25-a77b-dcbc4a5dbc21\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h885x" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.430990 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c616c83f-0616-4a1d-b2ac-69cdc88eef70-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9h9p4\" (UID: \"c616c83f-0616-4a1d-b2ac-69cdc88eef70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9h9p4" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.431196 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-console-oauth-config\") pod \"console-f9d7485db-dllzn\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") " pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.431293 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/deda0ab1-f81e-4898-b4cb-5627947b5ed4-encryption-config\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.431377 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a086194a-cb30-43f5-8cf2-f67d0f85a36d-serving-cert\") pod \"etcd-operator-b45778765-hx8gq\" (UID: \"a086194a-cb30-43f5-8cf2-f67d0f85a36d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx8gq" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.431477 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: 
I0227 18:47:52.431579 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/05d54587-469b-407f-aa60-f0fdefb9dce7-metrics-tls\") pod \"ingress-operator-5b745b69d9-cpb6p\" (UID: \"05d54587-469b-407f-aa60-f0fdefb9dce7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cpb6p" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.431674 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-service-ca\") pod \"console-f9d7485db-dllzn\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") " pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.431761 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.431868 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8071e59a-5e70-4dac-b36c-e17dd74f75b0-auth-proxy-config\") pod \"machine-approver-56656f9798-xmtml\" (UID: \"8071e59a-5e70-4dac-b36c-e17dd74f75b0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xmtml" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.431389 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46dc2075-e24c-46cf-9885-3de46322461d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-ckjzg\" (UID: \"46dc2075-e24c-46cf-9885-3de46322461d\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.432040 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-config\") pod \"controller-manager-879f6c89f-rn6m5\" (UID: \"89fbbf29-4a7d-40d3-ad12-9e1111396e8d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.432153 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/86f8ab04-83b9-497b-a8b4-cde27e61d568-audit-policies\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.432259 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05d54587-469b-407f-aa60-f0fdefb9dce7-trusted-ca\") pod \"ingress-operator-5b745b69d9-cpb6p\" (UID: \"05d54587-469b-407f-aa60-f0fdefb9dce7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cpb6p" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.432371 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d1ccfd2-99ab-4caf-82d3-6b58656de39f-service-ca-bundle\") pod \"router-default-5444994796-r858c\" (UID: \"2d1ccfd2-99ab-4caf-82d3-6b58656de39f\") " pod="openshift-ingress/router-default-5444994796-r858c" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.432463 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhw69\" (UniqueName: \"kubernetes.io/projected/bbbe9d1b-2a55-4b34-b452-32f51eef3278-kube-api-access-mhw69\") 
pod \"openshift-config-operator-7777fb866f-h8fzp\" (UID: \"bbbe9d1b-2a55-4b34-b452-32f51eef3278\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-h8fzp" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.432563 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqr8k\" (UniqueName: \"kubernetes.io/projected/c616c83f-0616-4a1d-b2ac-69cdc88eef70-kube-api-access-wqr8k\") pod \"authentication-operator-69f744f599-9h9p4\" (UID: \"c616c83f-0616-4a1d-b2ac-69cdc88eef70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9h9p4" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.432672 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-console-serving-cert\") pod \"console-f9d7485db-dllzn\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") " pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.432758 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1107c99b-98a7-4103-9e6c-dde234daacaf-images\") pod \"machine-api-operator-5694c8668f-jdhvt\" (UID: \"1107c99b-98a7-4103-9e6c-dde234daacaf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jdhvt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.432849 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-922ln\" (UID: \"9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-922ln" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.432947 4981 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwjkt\" (UniqueName: \"kubernetes.io/projected/05d54587-469b-407f-aa60-f0fdefb9dce7-kube-api-access-zwjkt\") pod \"ingress-operator-5b745b69d9-cpb6p\" (UID: \"05d54587-469b-407f-aa60-f0fdefb9dce7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cpb6p" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.433037 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.433149 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37a55d62-b540-4883-9548-3c0da02e8824-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2vmct\" (UID: \"37a55d62-b540-4883-9548-3c0da02e8824\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2vmct" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.433253 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/46dc2075-e24c-46cf-9885-3de46322461d-audit-dir\") pod \"apiserver-7bbb656c7d-ckjzg\" (UID: \"46dc2075-e24c-46cf-9885-3de46322461d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.432409 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c616c83f-0616-4a1d-b2ac-69cdc88eef70-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9h9p4\" (UID: 
\"c616c83f-0616-4a1d-b2ac-69cdc88eef70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9h9p4" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.433141 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.432788 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8071e59a-5e70-4dac-b36c-e17dd74f75b0-auth-proxy-config\") pod \"machine-approver-56656f9798-xmtml\" (UID: \"8071e59a-5e70-4dac-b36c-e17dd74f75b0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xmtml" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.433451 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/46dc2075-e24c-46cf-9885-3de46322461d-audit-dir\") pod \"apiserver-7bbb656c7d-ckjzg\" (UID: \"46dc2075-e24c-46cf-9885-3de46322461d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.430955 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536965-kh5p2"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.432972 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bedbc35-5c52-4c25-a77b-dcbc4a5dbc21-config\") pod \"openshift-apiserver-operator-796bbdcf4f-h885x\" (UID: \"3bedbc35-5c52-4c25-a77b-dcbc4a5dbc21\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h885x" Feb 27 18:47:52 crc 
kubenswrapper[4981]: I0227 18:47:52.433691 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.433787 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c616c83f-0616-4a1d-b2ac-69cdc88eef70-serving-cert\") pod \"authentication-operator-69f744f599-9h9p4\" (UID: \"c616c83f-0616-4a1d-b2ac-69cdc88eef70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9h9p4" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.433878 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/450816f5-eb2f-44e6-9b62-fd3f3b2fbf48-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j7hs2\" (UID: \"450816f5-eb2f-44e6-9b62-fd3f3b2fbf48\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7hs2" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.433735 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/86f8ab04-83b9-497b-a8b4-cde27e61d568-audit-policies\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.434248 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm7pd\" (UniqueName: 
\"kubernetes.io/projected/981ad6df-4f80-446c-83a8-cf8e4bc7436d-kube-api-access-fm7pd\") pod \"console-operator-58897d9998-7mfkz\" (UID: \"981ad6df-4f80-446c-83a8-cf8e4bc7436d\") " pod="openshift-console-operator/console-operator-58897d9998-7mfkz" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.434363 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1107c99b-98a7-4103-9e6c-dde234daacaf-images\") pod \"machine-api-operator-5694c8668f-jdhvt\" (UID: \"1107c99b-98a7-4103-9e6c-dde234daacaf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jdhvt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.434249 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-config\") pod \"controller-manager-879f6c89f-rn6m5\" (UID: \"89fbbf29-4a7d-40d3-ad12-9e1111396e8d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.434511 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/46dc2075-e24c-46cf-9885-3de46322461d-encryption-config\") pod \"apiserver-7bbb656c7d-ckjzg\" (UID: \"46dc2075-e24c-46cf-9885-3de46322461d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.434618 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/86f8ab04-83b9-497b-a8b4-cde27e61d568-audit-dir\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.434738 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-58j65\" (UniqueName: \"kubernetes.io/projected/3bedbc35-5c52-4c25-a77b-dcbc4a5dbc21-kube-api-access-58j65\") pod \"openshift-apiserver-operator-796bbdcf4f-h885x\" (UID: \"3bedbc35-5c52-4c25-a77b-dcbc4a5dbc21\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h885x" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.434874 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3667fc6c-078a-4be4-95ff-7174c74faf2c-metrics-tls\") pod \"dns-operator-744455d44c-c4m77\" (UID: \"3667fc6c-078a-4be4-95ff-7174c74faf2c\") " pod="openshift-dns-operator/dns-operator-744455d44c-c4m77" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.434994 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-922ln\" (UID: \"9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-922ln" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.434944 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sfsnw"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.434739 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/86f8ab04-83b9-497b-a8b4-cde27e61d568-audit-dir\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.435257 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/46dc2075-e24c-46cf-9885-3de46322461d-etcd-client\") pod \"apiserver-7bbb656c7d-ckjzg\" (UID: \"46dc2075-e24c-46cf-9885-3de46322461d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.435168 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9dac184-b115-4584-8344-d4cfea132d7d-config\") pod \"kube-controller-manager-operator-78b949d7b-swwkp\" (UID: \"e9dac184-b115-4584-8344-d4cfea132d7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-swwkp" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.435518 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.436202 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.436562 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.436617 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8071e59a-5e70-4dac-b36c-e17dd74f75b0-machine-approver-tls\") pod \"machine-approver-56656f9798-xmtml\" (UID: \"8071e59a-5e70-4dac-b36c-e17dd74f75b0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xmtml" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.436691 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfzkt\" (UniqueName: \"kubernetes.io/projected/32d9179a-38c6-482f-95be-c94b48b83856-kube-api-access-nfzkt\") pod \"downloads-7954f5f757-t9grl\" (UID: \"32d9179a-38c6-482f-95be-c94b48b83856\") " pod="openshift-console/downloads-7954f5f757-t9grl" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.436714 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/981ad6df-4f80-446c-83a8-cf8e4bc7436d-trusted-ca\") pod \"console-operator-58897d9998-7mfkz\" (UID: \"981ad6df-4f80-446c-83a8-cf8e4bc7436d\") " pod="openshift-console-operator/console-operator-58897d9998-7mfkz" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.436734 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6ngw\" (UniqueName: \"kubernetes.io/projected/cd14f9a5-6dec-4df9-99a3-94c248ae4f60-kube-api-access-s6ngw\") pod \"openshift-controller-manager-operator-756b6f6bc6-sfsnw\" (UID: \"cd14f9a5-6dec-4df9-99a3-94c248ae4f60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sfsnw" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.436751 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d1ccfd2-99ab-4caf-82d3-6b58656de39f-metrics-certs\") pod \"router-default-5444994796-r858c\" (UID: \"2d1ccfd2-99ab-4caf-82d3-6b58656de39f\") " pod="openshift-ingress/router-default-5444994796-r858c" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.436771 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.436788 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv654\" (UniqueName: \"kubernetes.io/projected/9904cc6d-0e10-4a0b-bd3b-c0e6592b4856-kube-api-access-cv654\") pod \"multus-admission-controller-857f4d67dd-xnkzj\" (UID: \"9904cc6d-0e10-4a0b-bd3b-c0e6592b4856\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xnkzj" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.436806 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c616c83f-0616-4a1d-b2ac-69cdc88eef70-service-ca-bundle\") pod \"authentication-operator-69f744f599-9h9p4\" (UID: \"c616c83f-0616-4a1d-b2ac-69cdc88eef70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9h9p4" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.436825 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-trusted-ca-bundle\") pod \"console-f9d7485db-dllzn\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") " 
pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.436846 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgbg7\" (UniqueName: \"kubernetes.io/projected/10e31a3f-eb88-4c8b-93e7-e251f762d29e-kube-api-access-lgbg7\") pod \"route-controller-manager-6576b87f9c-ns4jv\" (UID: \"10e31a3f-eb88-4c8b-93e7-e251f762d29e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.436865 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c616c83f-0616-4a1d-b2ac-69cdc88eef70-config\") pod \"authentication-operator-69f744f599-9h9p4\" (UID: \"c616c83f-0616-4a1d-b2ac-69cdc88eef70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9h9p4" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.436883 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49xsz\" (UniqueName: \"kubernetes.io/projected/a086194a-cb30-43f5-8cf2-f67d0f85a36d-kube-api-access-49xsz\") pod \"etcd-operator-b45778765-hx8gq\" (UID: \"a086194a-cb30-43f5-8cf2-f67d0f85a36d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx8gq" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.436905 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.436921 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/deda0ab1-f81e-4898-b4cb-5627947b5ed4-node-pullsecrets\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.436937 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/01036322-73f2-4c61-b59d-ff9eff5d4b5f-srv-cert\") pod \"olm-operator-6b444d44fb-g2q7p\" (UID: \"01036322-73f2-4c61-b59d-ff9eff5d4b5f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2q7p" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.436952 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w45dz\" (UniqueName: \"kubernetes.io/projected/01036322-73f2-4c61-b59d-ff9eff5d4b5f-kube-api-access-w45dz\") pod \"olm-operator-6b444d44fb-g2q7p\" (UID: \"01036322-73f2-4c61-b59d-ff9eff5d4b5f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2q7p" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.436971 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-client-ca\") pod \"controller-manager-879f6c89f-rn6m5\" (UID: \"89fbbf29-4a7d-40d3-ad12-9e1111396e8d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.436987 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd14f9a5-6dec-4df9-99a3-94c248ae4f60-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-sfsnw\" (UID: \"cd14f9a5-6dec-4df9-99a3-94c248ae4f60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sfsnw" Feb 27 18:47:52 crc 
kubenswrapper[4981]: I0227 18:47:52.437005 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9dac184-b115-4584-8344-d4cfea132d7d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-swwkp\" (UID: \"e9dac184-b115-4584-8344-d4cfea132d7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-swwkp" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437023 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f25a5ec8-b28e-4b5c-a4f0-c6ad09e63b3d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5cs45\" (UID: \"f25a5ec8-b28e-4b5c-a4f0-c6ad09e63b3d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5cs45" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437038 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a086194a-cb30-43f5-8cf2-f67d0f85a36d-etcd-service-ca\") pod \"etcd-operator-b45778765-hx8gq\" (UID: \"a086194a-cb30-43f5-8cf2-f67d0f85a36d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx8gq" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437076 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-922ln\" (UID: \"9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-922ln" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437094 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/a086194a-cb30-43f5-8cf2-f67d0f85a36d-etcd-client\") pod \"etcd-operator-b45778765-hx8gq\" (UID: \"a086194a-cb30-43f5-8cf2-f67d0f85a36d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx8gq" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437111 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-serving-cert\") pod \"controller-manager-879f6c89f-rn6m5\" (UID: \"89fbbf29-4a7d-40d3-ad12-9e1111396e8d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437129 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437145 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/01036322-73f2-4c61-b59d-ff9eff5d4b5f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g2q7p\" (UID: \"01036322-73f2-4c61-b59d-ff9eff5d4b5f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2q7p" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437162 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd14f9a5-6dec-4df9-99a3-94c248ae4f60-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-sfsnw\" (UID: \"cd14f9a5-6dec-4df9-99a3-94c248ae4f60\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sfsnw" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437182 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1107c99b-98a7-4103-9e6c-dde234daacaf-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jdhvt\" (UID: \"1107c99b-98a7-4103-9e6c-dde234daacaf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jdhvt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437197 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deda0ab1-f81e-4898-b4cb-5627947b5ed4-serving-cert\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437210 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c616c83f-0616-4a1d-b2ac-69cdc88eef70-serving-cert\") pod \"authentication-operator-69f744f599-9h9p4\" (UID: \"c616c83f-0616-4a1d-b2ac-69cdc88eef70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9h9p4" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437212 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/981ad6df-4f80-446c-83a8-cf8e4bc7436d-serving-cert\") pod \"console-operator-58897d9998-7mfkz\" (UID: \"981ad6df-4f80-446c-83a8-cf8e4bc7436d\") " pod="openshift-console-operator/console-operator-58897d9998-7mfkz" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437268 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/37a55d62-b540-4883-9548-3c0da02e8824-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2vmct\" (UID: \"37a55d62-b540-4883-9548-3c0da02e8824\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2vmct" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437300 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgfq4\" (UniqueName: \"kubernetes.io/projected/d5b69559-dbbb-451e-8a89-0d8c61a363f3-kube-api-access-pgfq4\") pod \"control-plane-machine-set-operator-78cbb6b69f-bkfrt\" (UID: \"d5b69559-dbbb-451e-8a89-0d8c61a363f3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bkfrt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437322 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x9zw\" (UniqueName: \"kubernetes.io/projected/deda0ab1-f81e-4898-b4cb-5627947b5ed4-kube-api-access-5x9zw\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437342 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/23834dc7-7dcc-4a63-b666-3c1829da4cf4-proxy-tls\") pod \"machine-config-controller-84d6567774-l8kzl\" (UID: \"23834dc7-7dcc-4a63-b666-3c1829da4cf4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l8kzl" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437364 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/deda0ab1-f81e-4898-b4cb-5627947b5ed4-audit-dir\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " 
pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437389 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/46dc2075-e24c-46cf-9885-3de46322461d-audit-policies\") pod \"apiserver-7bbb656c7d-ckjzg\" (UID: \"46dc2075-e24c-46cf-9885-3de46322461d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437412 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzdvc\" (UniqueName: \"kubernetes.io/projected/46dc2075-e24c-46cf-9885-3de46322461d-kube-api-access-hzdvc\") pod \"apiserver-7bbb656c7d-ckjzg\" (UID: \"46dc2075-e24c-46cf-9885-3de46322461d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437435 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1107c99b-98a7-4103-9e6c-dde234daacaf-config\") pod \"machine-api-operator-5694c8668f-jdhvt\" (UID: \"1107c99b-98a7-4103-9e6c-dde234daacaf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jdhvt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437459 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/46dc2075-e24c-46cf-9885-3de46322461d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ckjzg\" (UID: \"46dc2075-e24c-46cf-9885-3de46322461d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437482 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a086194a-cb30-43f5-8cf2-f67d0f85a36d-config\") pod \"etcd-operator-b45778765-hx8gq\" (UID: 
\"a086194a-cb30-43f5-8cf2-f67d0f85a36d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx8gq" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437503 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rn6m5\" (UID: \"89fbbf29-4a7d-40d3-ad12-9e1111396e8d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437527 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bedbc35-5c52-4c25-a77b-dcbc4a5dbc21-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-h885x\" (UID: \"3bedbc35-5c52-4c25-a77b-dcbc4a5dbc21\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h885x" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437550 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2d1ccfd2-99ab-4caf-82d3-6b58656de39f-stats-auth\") pod \"router-default-5444994796-r858c\" (UID: \"2d1ccfd2-99ab-4caf-82d3-6b58656de39f\") " pod="openshift-ingress/router-default-5444994796-r858c" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437577 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrlns\" (UniqueName: \"kubernetes.io/projected/1107c99b-98a7-4103-9e6c-dde234daacaf-kube-api-access-mrlns\") pod \"machine-api-operator-5694c8668f-jdhvt\" (UID: \"1107c99b-98a7-4103-9e6c-dde234daacaf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jdhvt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437599 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/b7d035ce-f026-4668-9fca-c344f1fe60e3-config-volume\") pod \"collect-profiles-29536965-kh5p2\" (UID: \"b7d035ce-f026-4668-9fca-c344f1fe60e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-kh5p2" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437622 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf5mb\" (UniqueName: \"kubernetes.io/projected/86f8ab04-83b9-497b-a8b4-cde27e61d568-kube-api-access-nf5mb\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437642 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/deda0ab1-f81e-4898-b4cb-5627947b5ed4-audit\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437655 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c616c83f-0616-4a1d-b2ac-69cdc88eef70-service-ca-bundle\") pod \"authentication-operator-69f744f599-9h9p4\" (UID: \"c616c83f-0616-4a1d-b2ac-69cdc88eef70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9h9p4" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437663 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3b236af-9b1b-45f2-8780-ea9d4726bd0f-srv-cert\") pod \"catalog-operator-68c6474976-85vdw\" (UID: \"b3b236af-9b1b-45f2-8780-ea9d4726bd0f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85vdw" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 
18:47:52.437237 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/46dc2075-e24c-46cf-9885-3de46322461d-encryption-config\") pod \"apiserver-7bbb656c7d-ckjzg\" (UID: \"46dc2075-e24c-46cf-9885-3de46322461d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437693 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3b236af-9b1b-45f2-8780-ea9d4726bd0f-profile-collector-cert\") pod \"catalog-operator-68c6474976-85vdw\" (UID: \"b3b236af-9b1b-45f2-8780-ea9d4726bd0f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85vdw" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.437768 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c616c83f-0616-4a1d-b2ac-69cdc88eef70-config\") pod \"authentication-operator-69f744f599-9h9p4\" (UID: \"c616c83f-0616-4a1d-b2ac-69cdc88eef70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9h9p4" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.438006 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.438308 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/46dc2075-e24c-46cf-9885-3de46322461d-audit-policies\") pod \"apiserver-7bbb656c7d-ckjzg\" (UID: \"46dc2075-e24c-46cf-9885-3de46322461d\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.438367 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.438721 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.438752 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-client-ca\") pod \"controller-manager-879f6c89f-rn6m5\" (UID: \"89fbbf29-4a7d-40d3-ad12-9e1111396e8d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.438788 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m49pz\" (UniqueName: \"kubernetes.io/projected/2d1ccfd2-99ab-4caf-82d3-6b58656de39f-kube-api-access-m49pz\") pod \"router-default-5444994796-r858c\" (UID: \"2d1ccfd2-99ab-4caf-82d3-6b58656de39f\") " pod="openshift-ingress/router-default-5444994796-r858c" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.438811 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/10e31a3f-eb88-4c8b-93e7-e251f762d29e-client-ca\") pod \"route-controller-manager-6576b87f9c-ns4jv\" (UID: \"10e31a3f-eb88-4c8b-93e7-e251f762d29e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.438828 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2eb44459-26b2-48d3-931c-e718dda5133b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-v46wf\" (UID: \"2eb44459-26b2-48d3-931c-e718dda5133b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v46wf" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.438844 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8071e59a-5e70-4dac-b36c-e17dd74f75b0-config\") pod \"machine-approver-56656f9798-xmtml\" (UID: \"8071e59a-5e70-4dac-b36c-e17dd74f75b0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xmtml" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.438862 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2h8x\" (UniqueName: \"kubernetes.io/projected/9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc-kube-api-access-g2h8x\") pod \"cluster-image-registry-operator-dc59b4c8b-922ln\" (UID: \"9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-922ln" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.438863 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/46dc2075-e24c-46cf-9885-3de46322461d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-ckjzg\" (UID: \"46dc2075-e24c-46cf-9885-3de46322461d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" 
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.439188 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8071e59a-5e70-4dac-b36c-e17dd74f75b0-config\") pod \"machine-approver-56656f9798-xmtml\" (UID: \"8071e59a-5e70-4dac-b36c-e17dd74f75b0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xmtml" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.439217 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deda0ab1-f81e-4898-b4cb-5627947b5ed4-config\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.439236 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89kz9\" (UniqueName: \"kubernetes.io/projected/79fd2558-1055-4895-9db5-58da8eb8aacf-kube-api-access-89kz9\") pod \"migrator-59844c95c7-qb2m5\" (UID: \"79fd2558-1055-4895-9db5-58da8eb8aacf\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qb2m5" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.439258 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.439278 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/d5b69559-dbbb-451e-8a89-0d8c61a363f3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bkfrt\" (UID: \"d5b69559-dbbb-451e-8a89-0d8c61a363f3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bkfrt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.439417 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rn6m5\" (UID: \"89fbbf29-4a7d-40d3-ad12-9e1111396e8d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.439850 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/deda0ab1-f81e-4898-b4cb-5627947b5ed4-etcd-serving-ca\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.439918 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9904cc6d-0e10-4a0b-bd3b-c0e6592b4856-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xnkzj\" (UID: \"9904cc6d-0e10-4a0b-bd3b-c0e6592b4856\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xnkzj" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.439933 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" 
Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.439941 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10e31a3f-eb88-4c8b-93e7-e251f762d29e-client-ca\") pod \"route-controller-manager-6576b87f9c-ns4jv\" (UID: \"10e31a3f-eb88-4c8b-93e7-e251f762d29e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.439984 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjbj5\" (UniqueName: \"kubernetes.io/projected/f25a5ec8-b28e-4b5c-a4f0-c6ad09e63b3d-kube-api-access-mjbj5\") pod \"kube-storage-version-migrator-operator-b67b599dd-5cs45\" (UID: \"f25a5ec8-b28e-4b5c-a4f0-c6ad09e63b3d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5cs45" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.440017 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bptxk\" (UniqueName: \"kubernetes.io/projected/450816f5-eb2f-44e6-9b62-fd3f3b2fbf48-kube-api-access-bptxk\") pod \"cluster-samples-operator-665b6dd947-j7hs2\" (UID: \"450816f5-eb2f-44e6-9b62-fd3f3b2fbf48\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7hs2" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.440042 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tckjt\" (UniqueName: \"kubernetes.io/projected/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-kube-api-access-tckjt\") pod \"console-f9d7485db-dllzn\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") " pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.440083 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-94hqc\" (UniqueName: \"kubernetes.io/projected/23834dc7-7dcc-4a63-b666-3c1829da4cf4-kube-api-access-94hqc\") pod \"machine-config-controller-84d6567774-l8kzl\" (UID: \"23834dc7-7dcc-4a63-b666-3c1829da4cf4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l8kzl" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.440106 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2d1ccfd2-99ab-4caf-82d3-6b58656de39f-default-certificate\") pod \"router-default-5444994796-r858c\" (UID: \"2d1ccfd2-99ab-4caf-82d3-6b58656de39f\") " pod="openshift-ingress/router-default-5444994796-r858c" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.440168 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7mfkz"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.440183 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.440193 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10e31a3f-eb88-4c8b-93e7-e251f762d29e-config\") pod \"route-controller-manager-6576b87f9c-ns4jv\" (UID: \"10e31a3f-eb88-4c8b-93e7-e251f762d29e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.440223 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/deda0ab1-f81e-4898-b4cb-5627947b5ed4-etcd-client\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.440276 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.440344 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v87jv\" (UniqueName: \"kubernetes.io/projected/8071e59a-5e70-4dac-b36c-e17dd74f75b0-kube-api-access-v87jv\") pod \"machine-approver-56656f9798-xmtml\" (UID: \"8071e59a-5e70-4dac-b36c-e17dd74f75b0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xmtml" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.440671 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1107c99b-98a7-4103-9e6c-dde234daacaf-config\") pod \"machine-api-operator-5694c8668f-jdhvt\" (UID: \"1107c99b-98a7-4103-9e6c-dde234daacaf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jdhvt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.440684 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbbe9d1b-2a55-4b34-b452-32f51eef3278-serving-cert\") pod \"openshift-config-operator-7777fb866f-h8fzp\" (UID: \"bbbe9d1b-2a55-4b34-b452-32f51eef3278\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-h8fzp" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.440782 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9dac184-b115-4584-8344-d4cfea132d7d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-swwkp\" (UID: \"e9dac184-b115-4584-8344-d4cfea132d7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-swwkp" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.440829 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46dc2075-e24c-46cf-9885-3de46322461d-serving-cert\") pod \"apiserver-7bbb656c7d-ckjzg\" (UID: \"46dc2075-e24c-46cf-9885-3de46322461d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.440860 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdvf5\" (UniqueName: \"kubernetes.io/projected/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-kube-api-access-rdvf5\") pod \"controller-manager-879f6c89f-rn6m5\" (UID: \"89fbbf29-4a7d-40d3-ad12-9e1111396e8d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.440970 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a086194a-cb30-43f5-8cf2-f67d0f85a36d-etcd-ca\") pod \"etcd-operator-b45778765-hx8gq\" (UID: \"a086194a-cb30-43f5-8cf2-f67d0f85a36d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx8gq" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.441014 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10e31a3f-eb88-4c8b-93e7-e251f762d29e-config\") pod \"route-controller-manager-6576b87f9c-ns4jv\" (UID: \"10e31a3f-eb88-4c8b-93e7-e251f762d29e\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.441083 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10e31a3f-eb88-4c8b-93e7-e251f762d29e-serving-cert\") pod \"route-controller-manager-6576b87f9c-ns4jv\" (UID: \"10e31a3f-eb88-4c8b-93e7-e251f762d29e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.441105 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f25a5ec8-b28e-4b5c-a4f0-c6ad09e63b3d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5cs45\" (UID: \"f25a5ec8-b28e-4b5c-a4f0-c6ad09e63b3d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5cs45" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.441199 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-console-config\") pod \"console-f9d7485db-dllzn\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") " pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.441224 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh69n\" (UniqueName: \"kubernetes.io/projected/b3b236af-9b1b-45f2-8780-ea9d4726bd0f-kube-api-access-fh69n\") pod \"catalog-operator-68c6474976-85vdw\" (UID: \"b3b236af-9b1b-45f2-8780-ea9d4726bd0f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85vdw" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.442208 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.442478 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1107c99b-98a7-4103-9e6c-dde234daacaf-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jdhvt\" (UID: \"1107c99b-98a7-4103-9e6c-dde234daacaf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jdhvt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.442530 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.443228 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.443320 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbbe9d1b-2a55-4b34-b452-32f51eef3278-serving-cert\") pod \"openshift-config-operator-7777fb866f-h8fzp\" (UID: \"bbbe9d1b-2a55-4b34-b452-32f51eef3278\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-h8fzp" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.443668 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-serving-cert\") pod \"controller-manager-879f6c89f-rn6m5\" (UID: \"89fbbf29-4a7d-40d3-ad12-9e1111396e8d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.443811 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-t9grl"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.444571 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10e31a3f-eb88-4c8b-93e7-e251f762d29e-serving-cert\") pod \"route-controller-manager-6576b87f9c-ns4jv\" (UID: \"10e31a3f-eb88-4c8b-93e7-e251f762d29e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.445025 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bedbc35-5c52-4c25-a77b-dcbc4a5dbc21-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-h885x\" (UID: \"3bedbc35-5c52-4c25-a77b-dcbc4a5dbc21\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h885x" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.445418 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46dc2075-e24c-46cf-9885-3de46322461d-serving-cert\") pod \"apiserver-7bbb656c7d-ckjzg\" (UID: \"46dc2075-e24c-46cf-9885-3de46322461d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.445692 4981 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8071e59a-5e70-4dac-b36c-e17dd74f75b0-machine-approver-tls\") pod \"machine-approver-56656f9798-xmtml\" (UID: \"8071e59a-5e70-4dac-b36c-e17dd74f75b0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xmtml" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.445761 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7r9r7"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.446717 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85vdw"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.447682 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5f96t"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.448406 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.448843 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-v5c96"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.450301 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqktd"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.451282 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2vmct"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.452225 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-n7dd8"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.453240 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-service-ca/service-ca-9c57cc56f-5pgcp"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.454201 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-v46wf"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.455150 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-plxr2"] Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.468018 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.488429 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.507801 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.528513 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.541821 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2eb44459-26b2-48d3-931c-e718dda5133b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-v46wf\" (UID: \"2eb44459-26b2-48d3-931c-e718dda5133b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v46wf" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.541864 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2h8x\" (UniqueName: \"kubernetes.io/projected/9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc-kube-api-access-g2h8x\") pod \"cluster-image-registry-operator-dc59b4c8b-922ln\" (UID: 
\"9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-922ln" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.541887 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deda0ab1-f81e-4898-b4cb-5627947b5ed4-config\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.541905 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89kz9\" (UniqueName: \"kubernetes.io/projected/79fd2558-1055-4895-9db5-58da8eb8aacf-kube-api-access-89kz9\") pod \"migrator-59844c95c7-qb2m5\" (UID: \"79fd2558-1055-4895-9db5-58da8eb8aacf\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qb2m5" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.541939 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d5b69559-dbbb-451e-8a89-0d8c61a363f3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bkfrt\" (UID: \"d5b69559-dbbb-451e-8a89-0d8c61a363f3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bkfrt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.541957 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9904cc6d-0e10-4a0b-bd3b-c0e6592b4856-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xnkzj\" (UID: \"9904cc6d-0e10-4a0b-bd3b-c0e6592b4856\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xnkzj" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.541972 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/deda0ab1-f81e-4898-b4cb-5627947b5ed4-etcd-serving-ca\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542087 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjbj5\" (UniqueName: \"kubernetes.io/projected/f25a5ec8-b28e-4b5c-a4f0-c6ad09e63b3d-kube-api-access-mjbj5\") pod \"kube-storage-version-migrator-operator-b67b599dd-5cs45\" (UID: \"f25a5ec8-b28e-4b5c-a4f0-c6ad09e63b3d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5cs45" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542109 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bptxk\" (UniqueName: \"kubernetes.io/projected/450816f5-eb2f-44e6-9b62-fd3f3b2fbf48-kube-api-access-bptxk\") pod \"cluster-samples-operator-665b6dd947-j7hs2\" (UID: \"450816f5-eb2f-44e6-9b62-fd3f3b2fbf48\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7hs2" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542125 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tckjt\" (UniqueName: \"kubernetes.io/projected/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-kube-api-access-tckjt\") pod \"console-f9d7485db-dllzn\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") " pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542157 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94hqc\" (UniqueName: \"kubernetes.io/projected/23834dc7-7dcc-4a63-b666-3c1829da4cf4-kube-api-access-94hqc\") pod \"machine-config-controller-84d6567774-l8kzl\" (UID: \"23834dc7-7dcc-4a63-b666-3c1829da4cf4\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l8kzl" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542176 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2d1ccfd2-99ab-4caf-82d3-6b58656de39f-default-certificate\") pod \"router-default-5444994796-r858c\" (UID: \"2d1ccfd2-99ab-4caf-82d3-6b58656de39f\") " pod="openshift-ingress/router-default-5444994796-r858c" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542195 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/deda0ab1-f81e-4898-b4cb-5627947b5ed4-etcd-client\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542238 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9dac184-b115-4584-8344-d4cfea132d7d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-swwkp\" (UID: \"e9dac184-b115-4584-8344-d4cfea132d7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-swwkp" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542266 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f25a5ec8-b28e-4b5c-a4f0-c6ad09e63b3d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5cs45\" (UID: \"f25a5ec8-b28e-4b5c-a4f0-c6ad09e63b3d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5cs45" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542282 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/a086194a-cb30-43f5-8cf2-f67d0f85a36d-etcd-ca\") pod \"etcd-operator-b45778765-hx8gq\" (UID: \"a086194a-cb30-43f5-8cf2-f67d0f85a36d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx8gq" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542314 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh69n\" (UniqueName: \"kubernetes.io/projected/b3b236af-9b1b-45f2-8780-ea9d4726bd0f-kube-api-access-fh69n\") pod \"catalog-operator-68c6474976-85vdw\" (UID: \"b3b236af-9b1b-45f2-8780-ea9d4726bd0f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85vdw" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542331 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-console-config\") pod \"console-f9d7485db-dllzn\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") " pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542347 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7d035ce-f026-4668-9fca-c344f1fe60e3-secret-volume\") pod \"collect-profiles-29536965-kh5p2\" (UID: \"b7d035ce-f026-4668-9fca-c344f1fe60e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-kh5p2" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542364 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41343732-7cc4-4fe6-9435-8a6332ba522c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5f96t\" (UID: \"41343732-7cc4-4fe6-9435-8a6332ba522c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5f96t" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542403 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2eb44459-26b2-48d3-931c-e718dda5133b-proxy-tls\") pod \"machine-config-operator-74547568cd-v46wf\" (UID: \"2eb44459-26b2-48d3-931c-e718dda5133b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v46wf" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542422 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th628\" (UniqueName: \"kubernetes.io/projected/b7d035ce-f026-4668-9fca-c344f1fe60e3-kube-api-access-th628\") pod \"collect-profiles-29536965-kh5p2\" (UID: \"b7d035ce-f026-4668-9fca-c344f1fe60e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-kh5p2" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542437 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/981ad6df-4f80-446c-83a8-cf8e4bc7436d-config\") pod \"console-operator-58897d9998-7mfkz\" (UID: \"981ad6df-4f80-446c-83a8-cf8e4bc7436d\") " pod="openshift-console-operator/console-operator-58897d9998-7mfkz" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542468 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41343732-7cc4-4fe6-9435-8a6332ba522c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5f96t\" (UID: \"41343732-7cc4-4fe6-9435-8a6332ba522c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5f96t" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542487 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/deda0ab1-f81e-4898-b4cb-5627947b5ed4-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " 
pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542504 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2eb44459-26b2-48d3-931c-e718dda5133b-images\") pod \"machine-config-operator-74547568cd-v46wf\" (UID: \"2eb44459-26b2-48d3-931c-e718dda5133b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v46wf" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542519 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h475c\" (UniqueName: \"kubernetes.io/projected/2eb44459-26b2-48d3-931c-e718dda5133b-kube-api-access-h475c\") pod \"machine-config-operator-74547568cd-v46wf\" (UID: \"2eb44459-26b2-48d3-931c-e718dda5133b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v46wf" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542552 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-oauth-serving-cert\") pod \"console-f9d7485db-dllzn\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") " pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542569 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41343732-7cc4-4fe6-9435-8a6332ba522c-config\") pod \"kube-apiserver-operator-766d6c64bb-5f96t\" (UID: \"41343732-7cc4-4fe6-9435-8a6332ba522c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5f96t" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542585 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05d54587-469b-407f-aa60-f0fdefb9dce7-bound-sa-token\") 
pod \"ingress-operator-5b745b69d9-cpb6p\" (UID: \"05d54587-469b-407f-aa60-f0fdefb9dce7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cpb6p" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542616 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/deda0ab1-f81e-4898-b4cb-5627947b5ed4-image-import-ca\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542634 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37a55d62-b540-4883-9548-3c0da02e8824-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2vmct\" (UID: \"37a55d62-b540-4883-9548-3c0da02e8824\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2vmct" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542650 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mj9r\" (UniqueName: \"kubernetes.io/projected/3667fc6c-078a-4be4-95ff-7174c74faf2c-kube-api-access-8mj9r\") pod \"dns-operator-744455d44c-c4m77\" (UID: \"3667fc6c-078a-4be4-95ff-7174c74faf2c\") " pod="openshift-dns-operator/dns-operator-744455d44c-c4m77" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542665 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/23834dc7-7dcc-4a63-b666-3c1829da4cf4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-l8kzl\" (UID: \"23834dc7-7dcc-4a63-b666-3c1829da4cf4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l8kzl" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542694 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-console-oauth-config\") pod \"console-f9d7485db-dllzn\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") " pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542709 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/deda0ab1-f81e-4898-b4cb-5627947b5ed4-encryption-config\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542723 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a086194a-cb30-43f5-8cf2-f67d0f85a36d-serving-cert\") pod \"etcd-operator-b45778765-hx8gq\" (UID: \"a086194a-cb30-43f5-8cf2-f67d0f85a36d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx8gq" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542743 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/05d54587-469b-407f-aa60-f0fdefb9dce7-metrics-tls\") pod \"ingress-operator-5b745b69d9-cpb6p\" (UID: \"05d54587-469b-407f-aa60-f0fdefb9dce7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cpb6p" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542772 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-service-ca\") pod \"console-f9d7485db-dllzn\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") " pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542789 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05d54587-469b-407f-aa60-f0fdefb9dce7-trusted-ca\") pod \"ingress-operator-5b745b69d9-cpb6p\" (UID: \"05d54587-469b-407f-aa60-f0fdefb9dce7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cpb6p" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542807 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d1ccfd2-99ab-4caf-82d3-6b58656de39f-service-ca-bundle\") pod \"router-default-5444994796-r858c\" (UID: \"2d1ccfd2-99ab-4caf-82d3-6b58656de39f\") " pod="openshift-ingress/router-default-5444994796-r858c" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542848 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-console-serving-cert\") pod \"console-f9d7485db-dllzn\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") " pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542863 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-922ln\" (UID: \"9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-922ln" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542880 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwjkt\" (UniqueName: \"kubernetes.io/projected/05d54587-469b-407f-aa60-f0fdefb9dce7-kube-api-access-zwjkt\") pod \"ingress-operator-5b745b69d9-cpb6p\" (UID: \"05d54587-469b-407f-aa60-f0fdefb9dce7\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cpb6p" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542896 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37a55d62-b540-4883-9548-3c0da02e8824-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2vmct\" (UID: \"37a55d62-b540-4883-9548-3c0da02e8824\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2vmct" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542936 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/450816f5-eb2f-44e6-9b62-fd3f3b2fbf48-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j7hs2\" (UID: \"450816f5-eb2f-44e6-9b62-fd3f3b2fbf48\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7hs2" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542953 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm7pd\" (UniqueName: \"kubernetes.io/projected/981ad6df-4f80-446c-83a8-cf8e4bc7436d-kube-api-access-fm7pd\") pod \"console-operator-58897d9998-7mfkz\" (UID: \"981ad6df-4f80-446c-83a8-cf8e4bc7436d\") " pod="openshift-console-operator/console-operator-58897d9998-7mfkz" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542951 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2eb44459-26b2-48d3-931c-e718dda5133b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-v46wf\" (UID: \"2eb44459-26b2-48d3-931c-e718dda5133b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v46wf" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.542968 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/3667fc6c-078a-4be4-95ff-7174c74faf2c-metrics-tls\") pod \"dns-operator-744455d44c-c4m77\" (UID: \"3667fc6c-078a-4be4-95ff-7174c74faf2c\") " pod="openshift-dns-operator/dns-operator-744455d44c-c4m77" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543110 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-922ln\" (UID: \"9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-922ln" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543155 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9dac184-b115-4584-8344-d4cfea132d7d-config\") pod \"kube-controller-manager-operator-78b949d7b-swwkp\" (UID: \"e9dac184-b115-4584-8344-d4cfea132d7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-swwkp" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543196 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfzkt\" (UniqueName: \"kubernetes.io/projected/32d9179a-38c6-482f-95be-c94b48b83856-kube-api-access-nfzkt\") pod \"downloads-7954f5f757-t9grl\" (UID: \"32d9179a-38c6-482f-95be-c94b48b83856\") " pod="openshift-console/downloads-7954f5f757-t9grl" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543231 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/981ad6df-4f80-446c-83a8-cf8e4bc7436d-trusted-ca\") pod \"console-operator-58897d9998-7mfkz\" (UID: \"981ad6df-4f80-446c-83a8-cf8e4bc7436d\") " pod="openshift-console-operator/console-operator-58897d9998-7mfkz" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543271 4981 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6ngw\" (UniqueName: \"kubernetes.io/projected/cd14f9a5-6dec-4df9-99a3-94c248ae4f60-kube-api-access-s6ngw\") pod \"openshift-controller-manager-operator-756b6f6bc6-sfsnw\" (UID: \"cd14f9a5-6dec-4df9-99a3-94c248ae4f60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sfsnw" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543312 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d1ccfd2-99ab-4caf-82d3-6b58656de39f-metrics-certs\") pod \"router-default-5444994796-r858c\" (UID: \"2d1ccfd2-99ab-4caf-82d3-6b58656de39f\") " pod="openshift-ingress/router-default-5444994796-r858c" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543348 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv654\" (UniqueName: \"kubernetes.io/projected/9904cc6d-0e10-4a0b-bd3b-c0e6592b4856-kube-api-access-cv654\") pod \"multus-admission-controller-857f4d67dd-xnkzj\" (UID: \"9904cc6d-0e10-4a0b-bd3b-c0e6592b4856\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xnkzj" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543396 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-trusted-ca-bundle\") pod \"console-f9d7485db-dllzn\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") " pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543438 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49xsz\" (UniqueName: \"kubernetes.io/projected/a086194a-cb30-43f5-8cf2-f67d0f85a36d-kube-api-access-49xsz\") pod \"etcd-operator-b45778765-hx8gq\" (UID: 
\"a086194a-cb30-43f5-8cf2-f67d0f85a36d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx8gq" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543483 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/deda0ab1-f81e-4898-b4cb-5627947b5ed4-node-pullsecrets\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543517 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/01036322-73f2-4c61-b59d-ff9eff5d4b5f-srv-cert\") pod \"olm-operator-6b444d44fb-g2q7p\" (UID: \"01036322-73f2-4c61-b59d-ff9eff5d4b5f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2q7p" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543550 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w45dz\" (UniqueName: \"kubernetes.io/projected/01036322-73f2-4c61-b59d-ff9eff5d4b5f-kube-api-access-w45dz\") pod \"olm-operator-6b444d44fb-g2q7p\" (UID: \"01036322-73f2-4c61-b59d-ff9eff5d4b5f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2q7p" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543614 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd14f9a5-6dec-4df9-99a3-94c248ae4f60-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-sfsnw\" (UID: \"cd14f9a5-6dec-4df9-99a3-94c248ae4f60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sfsnw" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543648 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e9dac184-b115-4584-8344-d4cfea132d7d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-swwkp\" (UID: \"e9dac184-b115-4584-8344-d4cfea132d7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-swwkp" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543686 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f25a5ec8-b28e-4b5c-a4f0-c6ad09e63b3d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5cs45\" (UID: \"f25a5ec8-b28e-4b5c-a4f0-c6ad09e63b3d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5cs45" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543722 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a086194a-cb30-43f5-8cf2-f67d0f85a36d-etcd-service-ca\") pod \"etcd-operator-b45778765-hx8gq\" (UID: \"a086194a-cb30-43f5-8cf2-f67d0f85a36d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx8gq" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543749 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-console-config\") pod \"console-f9d7485db-dllzn\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") " pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543756 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a086194a-cb30-43f5-8cf2-f67d0f85a36d-etcd-client\") pod \"etcd-operator-b45778765-hx8gq\" (UID: \"a086194a-cb30-43f5-8cf2-f67d0f85a36d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx8gq" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543814 
4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-922ln\" (UID: \"9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-922ln" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543845 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/01036322-73f2-4c61-b59d-ff9eff5d4b5f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g2q7p\" (UID: \"01036322-73f2-4c61-b59d-ff9eff5d4b5f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2q7p" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543863 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd14f9a5-6dec-4df9-99a3-94c248ae4f60-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-sfsnw\" (UID: \"cd14f9a5-6dec-4df9-99a3-94c248ae4f60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sfsnw" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543886 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgfq4\" (UniqueName: \"kubernetes.io/projected/d5b69559-dbbb-451e-8a89-0d8c61a363f3-kube-api-access-pgfq4\") pod \"control-plane-machine-set-operator-78cbb6b69f-bkfrt\" (UID: \"d5b69559-dbbb-451e-8a89-0d8c61a363f3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bkfrt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543904 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deda0ab1-f81e-4898-b4cb-5627947b5ed4-serving-cert\") pod 
\"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543920 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/981ad6df-4f80-446c-83a8-cf8e4bc7436d-serving-cert\") pod \"console-operator-58897d9998-7mfkz\" (UID: \"981ad6df-4f80-446c-83a8-cf8e4bc7436d\") " pod="openshift-console-operator/console-operator-58897d9998-7mfkz" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543937 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37a55d62-b540-4883-9548-3c0da02e8824-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2vmct\" (UID: \"37a55d62-b540-4883-9548-3c0da02e8824\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2vmct" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543953 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/deda0ab1-f81e-4898-b4cb-5627947b5ed4-audit-dir\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543970 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x9zw\" (UniqueName: \"kubernetes.io/projected/deda0ab1-f81e-4898-b4cb-5627947b5ed4-kube-api-access-5x9zw\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543988 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/23834dc7-7dcc-4a63-b666-3c1829da4cf4-proxy-tls\") pod \"machine-config-controller-84d6567774-l8kzl\" (UID: \"23834dc7-7dcc-4a63-b666-3c1829da4cf4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l8kzl" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.544015 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a086194a-cb30-43f5-8cf2-f67d0f85a36d-config\") pod \"etcd-operator-b45778765-hx8gq\" (UID: \"a086194a-cb30-43f5-8cf2-f67d0f85a36d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx8gq" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.544032 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2d1ccfd2-99ab-4caf-82d3-6b58656de39f-stats-auth\") pod \"router-default-5444994796-r858c\" (UID: \"2d1ccfd2-99ab-4caf-82d3-6b58656de39f\") " pod="openshift-ingress/router-default-5444994796-r858c" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.544082 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7d035ce-f026-4668-9fca-c344f1fe60e3-config-volume\") pod \"collect-profiles-29536965-kh5p2\" (UID: \"b7d035ce-f026-4668-9fca-c344f1fe60e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-kh5p2" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.544109 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/deda0ab1-f81e-4898-b4cb-5627947b5ed4-audit\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.544126 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" 
(UniqueName: \"kubernetes.io/secret/b3b236af-9b1b-45f2-8780-ea9d4726bd0f-srv-cert\") pod \"catalog-operator-68c6474976-85vdw\" (UID: \"b3b236af-9b1b-45f2-8780-ea9d4726bd0f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85vdw" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.544142 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3b236af-9b1b-45f2-8780-ea9d4726bd0f-profile-collector-cert\") pod \"catalog-operator-68c6474976-85vdw\" (UID: \"b3b236af-9b1b-45f2-8780-ea9d4726bd0f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85vdw" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.544159 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m49pz\" (UniqueName: \"kubernetes.io/projected/2d1ccfd2-99ab-4caf-82d3-6b58656de39f-kube-api-access-m49pz\") pod \"router-default-5444994796-r858c\" (UID: \"2d1ccfd2-99ab-4caf-82d3-6b58656de39f\") " pod="openshift-ingress/router-default-5444994796-r858c" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.544913 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/deda0ab1-f81e-4898-b4cb-5627947b5ed4-node-pullsecrets\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.545044 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-oauth-serving-cert\") pod \"console-f9d7485db-dllzn\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") " pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.545141 4981 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/981ad6df-4f80-446c-83a8-cf8e4bc7436d-config\") pod \"console-operator-58897d9998-7mfkz\" (UID: \"981ad6df-4f80-446c-83a8-cf8e4bc7436d\") " pod="openshift-console-operator/console-operator-58897d9998-7mfkz" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.544925 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/deda0ab1-f81e-4898-b4cb-5627947b5ed4-audit-dir\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.543160 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a086194a-cb30-43f5-8cf2-f67d0f85a36d-etcd-ca\") pod \"etcd-operator-b45778765-hx8gq\" (UID: \"a086194a-cb30-43f5-8cf2-f67d0f85a36d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx8gq" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.545719 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-trusted-ca-bundle\") pod \"console-f9d7485db-dllzn\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") " pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.545941 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd14f9a5-6dec-4df9-99a3-94c248ae4f60-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-sfsnw\" (UID: \"cd14f9a5-6dec-4df9-99a3-94c248ae4f60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sfsnw" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.545984 4981 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-service-ca\") pod \"console-f9d7485db-dllzn\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") " pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.546257 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41343732-7cc4-4fe6-9435-8a6332ba522c-config\") pod \"kube-apiserver-operator-766d6c64bb-5f96t\" (UID: \"41343732-7cc4-4fe6-9435-8a6332ba522c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5f96t" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.546519 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/981ad6df-4f80-446c-83a8-cf8e4bc7436d-trusted-ca\") pod \"console-operator-58897d9998-7mfkz\" (UID: \"981ad6df-4f80-446c-83a8-cf8e4bc7436d\") " pod="openshift-console-operator/console-operator-58897d9998-7mfkz" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.546942 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a086194a-cb30-43f5-8cf2-f67d0f85a36d-etcd-service-ca\") pod \"etcd-operator-b45778765-hx8gq\" (UID: \"a086194a-cb30-43f5-8cf2-f67d0f85a36d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx8gq" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.547643 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/23834dc7-7dcc-4a63-b666-3c1829da4cf4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-l8kzl\" (UID: \"23834dc7-7dcc-4a63-b666-3c1829da4cf4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l8kzl" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.547703 
4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-922ln\" (UID: \"9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-922ln" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.547666 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41343732-7cc4-4fe6-9435-8a6332ba522c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5f96t\" (UID: \"41343732-7cc4-4fe6-9435-8a6332ba522c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5f96t" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.548377 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a086194a-cb30-43f5-8cf2-f67d0f85a36d-config\") pod \"etcd-operator-b45778765-hx8gq\" (UID: \"a086194a-cb30-43f5-8cf2-f67d0f85a36d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx8gq" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.548410 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.548391 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a086194a-cb30-43f5-8cf2-f67d0f85a36d-etcd-client\") pod \"etcd-operator-b45778765-hx8gq\" (UID: \"a086194a-cb30-43f5-8cf2-f67d0f85a36d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx8gq" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.548739 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3667fc6c-078a-4be4-95ff-7174c74faf2c-metrics-tls\") pod 
\"dns-operator-744455d44c-c4m77\" (UID: \"3667fc6c-078a-4be4-95ff-7174c74faf2c\") " pod="openshift-dns-operator/dns-operator-744455d44c-c4m77" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.549851 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/450816f5-eb2f-44e6-9b62-fd3f3b2fbf48-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j7hs2\" (UID: \"450816f5-eb2f-44e6-9b62-fd3f3b2fbf48\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7hs2" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.550732 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-console-oauth-config\") pod \"console-f9d7485db-dllzn\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") " pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.550913 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-console-serving-cert\") pod \"console-f9d7485db-dllzn\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") " pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.551324 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd14f9a5-6dec-4df9-99a3-94c248ae4f60-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-sfsnw\" (UID: \"cd14f9a5-6dec-4df9-99a3-94c248ae4f60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sfsnw" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.551350 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-922ln\" (UID: \"9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-922ln" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.552498 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a086194a-cb30-43f5-8cf2-f67d0f85a36d-serving-cert\") pod \"etcd-operator-b45778765-hx8gq\" (UID: \"a086194a-cb30-43f5-8cf2-f67d0f85a36d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx8gq" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.554039 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/981ad6df-4f80-446c-83a8-cf8e4bc7436d-serving-cert\") pod \"console-operator-58897d9998-7mfkz\" (UID: \"981ad6df-4f80-446c-83a8-cf8e4bc7436d\") " pod="openshift-console-operator/console-operator-58897d9998-7mfkz" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.563571 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f25a5ec8-b28e-4b5c-a4f0-c6ad09e63b3d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5cs45\" (UID: \"f25a5ec8-b28e-4b5c-a4f0-c6ad09e63b3d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5cs45" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.568235 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.588219 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.593097 
4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f25a5ec8-b28e-4b5c-a4f0-c6ad09e63b3d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5cs45\" (UID: \"f25a5ec8-b28e-4b5c-a4f0-c6ad09e63b3d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5cs45" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.608505 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.628308 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.628378 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.628400 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.628789 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.628926 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.641859 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deda0ab1-f81e-4898-b4cb-5627947b5ed4-serving-cert\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.650027 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.652764 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/deda0ab1-f81e-4898-b4cb-5627947b5ed4-etcd-serving-ca\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.668992 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.678988 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/deda0ab1-f81e-4898-b4cb-5627947b5ed4-image-import-ca\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.698320 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.706333 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/deda0ab1-f81e-4898-b4cb-5627947b5ed4-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.709048 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.728937 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.749491 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.772638 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.789617 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.793588 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deda0ab1-f81e-4898-b4cb-5627947b5ed4-config\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.809447 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.821973 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/deda0ab1-f81e-4898-b4cb-5627947b5ed4-encryption-config\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") 
" pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.829119 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.837536 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/deda0ab1-f81e-4898-b4cb-5627947b5ed4-audit\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.849926 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.868902 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.889663 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.927952 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.930877 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.949601 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.969870 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.979920 4981 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/05d54587-469b-407f-aa60-f0fdefb9dce7-metrics-tls\") pod \"ingress-operator-5b745b69d9-cpb6p\" (UID: \"05d54587-469b-407f-aa60-f0fdefb9dce7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cpb6p" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.988970 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 27 18:47:52 crc kubenswrapper[4981]: I0227 18:47:52.997650 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/deda0ab1-f81e-4898-b4cb-5627947b5ed4-etcd-client\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.009405 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.037776 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.047482 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05d54587-469b-407f-aa60-f0fdefb9dce7-trusted-ca\") pod \"ingress-operator-5b745b69d9-cpb6p\" (UID: \"05d54587-469b-407f-aa60-f0fdefb9dce7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cpb6p" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.049410 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.069496 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 27 
18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.089872 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.101941 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37a55d62-b540-4883-9548-3c0da02e8824-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2vmct\" (UID: \"37a55d62-b540-4883-9548-3c0da02e8824\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2vmct" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.109323 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.121957 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d1ccfd2-99ab-4caf-82d3-6b58656de39f-metrics-certs\") pod \"router-default-5444994796-r858c\" (UID: \"2d1ccfd2-99ab-4caf-82d3-6b58656de39f\") " pod="openshift-ingress/router-default-5444994796-r858c" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.129150 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.136362 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37a55d62-b540-4883-9548-3c0da02e8824-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2vmct\" (UID: \"37a55d62-b540-4883-9548-3c0da02e8824\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2vmct" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.148624 4981 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-certs-default" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.157547 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2d1ccfd2-99ab-4caf-82d3-6b58656de39f-default-certificate\") pod \"router-default-5444994796-r858c\" (UID: \"2d1ccfd2-99ab-4caf-82d3-6b58656de39f\") " pod="openshift-ingress/router-default-5444994796-r858c" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.169670 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.179478 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2d1ccfd2-99ab-4caf-82d3-6b58656de39f-stats-auth\") pod \"router-default-5444994796-r858c\" (UID: \"2d1ccfd2-99ab-4caf-82d3-6b58656de39f\") " pod="openshift-ingress/router-default-5444994796-r858c" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.189455 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.198151 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d1ccfd2-99ab-4caf-82d3-6b58656de39f-service-ca-bundle\") pod \"router-default-5444994796-r858c\" (UID: \"2d1ccfd2-99ab-4caf-82d3-6b58656de39f\") " pod="openshift-ingress/router-default-5444994796-r858c" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.210132 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.229030 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 
18:47:53.249934 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.270017 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.289852 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.308911 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.320909 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9dac184-b115-4584-8344-d4cfea132d7d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-swwkp\" (UID: \"e9dac184-b115-4584-8344-d4cfea132d7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-swwkp" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.329192 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.337213 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9dac184-b115-4584-8344-d4cfea132d7d-config\") pod \"kube-controller-manager-operator-78b949d7b-swwkp\" (UID: \"e9dac184-b115-4584-8344-d4cfea132d7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-swwkp" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.347454 4981 request.go:700] Waited for 1.014172493s due to 
client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-sa-dockercfg-5xfcg&limit=500&resourceVersion=0 Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.349430 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.369702 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.389296 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.409010 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.417866 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d5b69559-dbbb-451e-8a89-0d8c61a363f3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bkfrt\" (UID: \"d5b69559-dbbb-451e-8a89-0d8c61a363f3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bkfrt" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.430007 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.449188 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 27 18:47:53 crc kubenswrapper[4981]: 
I0227 18:47:53.460516 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/23834dc7-7dcc-4a63-b666-3c1829da4cf4-proxy-tls\") pod \"machine-config-controller-84d6567774-l8kzl\" (UID: \"23834dc7-7dcc-4a63-b666-3c1829da4cf4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l8kzl" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.469542 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.488470 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.510187 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.530259 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.540410 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/01036322-73f2-4c61-b59d-ff9eff5d4b5f-srv-cert\") pod \"olm-operator-6b444d44fb-g2q7p\" (UID: \"01036322-73f2-4c61-b59d-ff9eff5d4b5f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2q7p" Feb 27 18:47:53 crc kubenswrapper[4981]: E0227 18:47:53.542615 4981 secret.go:188] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Feb 27 18:47:53 crc kubenswrapper[4981]: E0227 18:47:53.542635 4981 secret.go:188] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: 
timed out waiting for the condition Feb 27 18:47:53 crc kubenswrapper[4981]: E0227 18:47:53.542707 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9904cc6d-0e10-4a0b-bd3b-c0e6592b4856-webhook-certs podName:9904cc6d-0e10-4a0b-bd3b-c0e6592b4856 nodeName:}" failed. No retries permitted until 2026-02-27 18:47:54.042680649 +0000 UTC m=+173.521461849 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9904cc6d-0e10-4a0b-bd3b-c0e6592b4856-webhook-certs") pod "multus-admission-controller-857f4d67dd-xnkzj" (UID: "9904cc6d-0e10-4a0b-bd3b-c0e6592b4856") : failed to sync secret cache: timed out waiting for the condition Feb 27 18:47:53 crc kubenswrapper[4981]: E0227 18:47:53.542737 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2eb44459-26b2-48d3-931c-e718dda5133b-proxy-tls podName:2eb44459-26b2-48d3-931c-e718dda5133b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:54.042724751 +0000 UTC m=+173.521505941 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/2eb44459-26b2-48d3-931c-e718dda5133b-proxy-tls") pod "machine-config-operator-74547568cd-v46wf" (UID: "2eb44459-26b2-48d3-931c-e718dda5133b") : failed to sync secret cache: timed out waiting for the condition Feb 27 18:47:53 crc kubenswrapper[4981]: E0227 18:47:53.545023 4981 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 27 18:47:53 crc kubenswrapper[4981]: E0227 18:47:53.545121 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2eb44459-26b2-48d3-931c-e718dda5133b-images podName:2eb44459-26b2-48d3-931c-e718dda5133b nodeName:}" failed. 
No retries permitted until 2026-02-27 18:47:54.045099093 +0000 UTC m=+173.523880283 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/2eb44459-26b2-48d3-931c-e718dda5133b-images") pod "machine-config-operator-74547568cd-v46wf" (UID: "2eb44459-26b2-48d3-931c-e718dda5133b") : failed to sync configmap cache: timed out waiting for the condition Feb 27 18:47:53 crc kubenswrapper[4981]: E0227 18:47:53.545439 4981 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Feb 27 18:47:53 crc kubenswrapper[4981]: E0227 18:47:53.545534 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7d035ce-f026-4668-9fca-c344f1fe60e3-secret-volume podName:b7d035ce-f026-4668-9fca-c344f1fe60e3 nodeName:}" failed. No retries permitted until 2026-02-27 18:47:54.045501875 +0000 UTC m=+173.524283075 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-volume" (UniqueName: "kubernetes.io/secret/b7d035ce-f026-4668-9fca-c344f1fe60e3-secret-volume") pod "collect-profiles-29536965-kh5p2" (UID: "b7d035ce-f026-4668-9fca-c344f1fe60e3") : failed to sync secret cache: timed out waiting for the condition Feb 27 18:47:53 crc kubenswrapper[4981]: E0227 18:47:53.545635 4981 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Feb 27 18:47:53 crc kubenswrapper[4981]: E0227 18:47:53.545709 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01036322-73f2-4c61-b59d-ff9eff5d4b5f-profile-collector-cert podName:01036322-73f2-4c61-b59d-ff9eff5d4b5f nodeName:}" failed. No retries permitted until 2026-02-27 18:47:54.045691501 +0000 UTC m=+173.524472701 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/01036322-73f2-4c61-b59d-ff9eff5d4b5f-profile-collector-cert") pod "olm-operator-6b444d44fb-g2q7p" (UID: "01036322-73f2-4c61-b59d-ff9eff5d4b5f") : failed to sync secret cache: timed out waiting for the condition Feb 27 18:47:53 crc kubenswrapper[4981]: E0227 18:47:53.546573 4981 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 27 18:47:53 crc kubenswrapper[4981]: E0227 18:47:53.546626 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3b236af-9b1b-45f2-8780-ea9d4726bd0f-srv-cert podName:b3b236af-9b1b-45f2-8780-ea9d4726bd0f nodeName:}" failed. No retries permitted until 2026-02-27 18:47:54.046609469 +0000 UTC m=+173.525390659 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b3b236af-9b1b-45f2-8780-ea9d4726bd0f-srv-cert") pod "catalog-operator-68c6474976-85vdw" (UID: "b3b236af-9b1b-45f2-8780-ea9d4726bd0f") : failed to sync secret cache: timed out waiting for the condition Feb 27 18:47:53 crc kubenswrapper[4981]: E0227 18:47:53.547251 4981 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Feb 27 18:47:53 crc kubenswrapper[4981]: E0227 18:47:53.547321 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b7d035ce-f026-4668-9fca-c344f1fe60e3-config-volume podName:b7d035ce-f026-4668-9fca-c344f1fe60e3 nodeName:}" failed. No retries permitted until 2026-02-27 18:47:54.04730245 +0000 UTC m=+173.526083640 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b7d035ce-f026-4668-9fca-c344f1fe60e3-config-volume") pod "collect-profiles-29536965-kh5p2" (UID: "b7d035ce-f026-4668-9fca-c344f1fe60e3") : failed to sync configmap cache: timed out waiting for the condition Feb 27 18:47:53 crc kubenswrapper[4981]: E0227 18:47:53.547449 4981 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Feb 27 18:47:53 crc kubenswrapper[4981]: E0227 18:47:53.547499 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3b236af-9b1b-45f2-8780-ea9d4726bd0f-profile-collector-cert podName:b3b236af-9b1b-45f2-8780-ea9d4726bd0f nodeName:}" failed. No retries permitted until 2026-02-27 18:47:54.047484636 +0000 UTC m=+173.526265826 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/b3b236af-9b1b-45f2-8780-ea9d4726bd0f-profile-collector-cert") pod "catalog-operator-68c6474976-85vdw" (UID: "b3b236af-9b1b-45f2-8780-ea9d4726bd0f") : failed to sync secret cache: timed out waiting for the condition Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.549775 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.568936 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.588665 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.608820 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 27 18:47:53 crc 
kubenswrapper[4981]: I0227 18:47:53.629555 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.649337 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.668762 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.689663 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.729809 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.748935 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.769685 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.789575 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.829752 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.850417 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 
18:47:53.869453 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.889589 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.910356 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.930442 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.950134 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.969348 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 27 18:47:53 crc kubenswrapper[4981]: I0227 18:47:53.989376 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.009327 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.030227 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.051999 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.069004 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 
18:47:54.074877 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7d035ce-f026-4668-9fca-c344f1fe60e3-config-volume\") pod \"collect-profiles-29536965-kh5p2\" (UID: \"b7d035ce-f026-4668-9fca-c344f1fe60e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-kh5p2" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.074946 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3b236af-9b1b-45f2-8780-ea9d4726bd0f-srv-cert\") pod \"catalog-operator-68c6474976-85vdw\" (UID: \"b3b236af-9b1b-45f2-8780-ea9d4726bd0f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85vdw" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.074985 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3b236af-9b1b-45f2-8780-ea9d4726bd0f-profile-collector-cert\") pod \"catalog-operator-68c6474976-85vdw\" (UID: \"b3b236af-9b1b-45f2-8780-ea9d4726bd0f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85vdw" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.075089 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9904cc6d-0e10-4a0b-bd3b-c0e6592b4856-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xnkzj\" (UID: \"9904cc6d-0e10-4a0b-bd3b-c0e6592b4856\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xnkzj" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.075246 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7d035ce-f026-4668-9fca-c344f1fe60e3-secret-volume\") pod \"collect-profiles-29536965-kh5p2\" (UID: \"b7d035ce-f026-4668-9fca-c344f1fe60e3\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-kh5p2" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.075292 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2eb44459-26b2-48d3-931c-e718dda5133b-proxy-tls\") pod \"machine-config-operator-74547568cd-v46wf\" (UID: \"2eb44459-26b2-48d3-931c-e718dda5133b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v46wf" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.075359 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2eb44459-26b2-48d3-931c-e718dda5133b-images\") pod \"machine-config-operator-74547568cd-v46wf\" (UID: \"2eb44459-26b2-48d3-931c-e718dda5133b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v46wf" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.075674 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/01036322-73f2-4c61-b59d-ff9eff5d4b5f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g2q7p\" (UID: \"01036322-73f2-4c61-b59d-ff9eff5d4b5f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2q7p" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.077597 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7d035ce-f026-4668-9fca-c344f1fe60e3-config-volume\") pod \"collect-profiles-29536965-kh5p2\" (UID: \"b7d035ce-f026-4668-9fca-c344f1fe60e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-kh5p2" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.077883 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2eb44459-26b2-48d3-931c-e718dda5133b-images\") pod 
\"machine-config-operator-74547568cd-v46wf\" (UID: \"2eb44459-26b2-48d3-931c-e718dda5133b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v46wf" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.082764 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3b236af-9b1b-45f2-8780-ea9d4726bd0f-srv-cert\") pod \"catalog-operator-68c6474976-85vdw\" (UID: \"b3b236af-9b1b-45f2-8780-ea9d4726bd0f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85vdw" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.082769 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3b236af-9b1b-45f2-8780-ea9d4726bd0f-profile-collector-cert\") pod \"catalog-operator-68c6474976-85vdw\" (UID: \"b3b236af-9b1b-45f2-8780-ea9d4726bd0f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85vdw" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.083608 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9904cc6d-0e10-4a0b-bd3b-c0e6592b4856-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xnkzj\" (UID: \"9904cc6d-0e10-4a0b-bd3b-c0e6592b4856\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xnkzj" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.083758 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/01036322-73f2-4c61-b59d-ff9eff5d4b5f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g2q7p\" (UID: \"01036322-73f2-4c61-b59d-ff9eff5d4b5f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2q7p" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.083833 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7d035ce-f026-4668-9fca-c344f1fe60e3-secret-volume\") pod \"collect-profiles-29536965-kh5p2\" (UID: \"b7d035ce-f026-4668-9fca-c344f1fe60e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-kh5p2" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.084844 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2eb44459-26b2-48d3-931c-e718dda5133b-proxy-tls\") pod \"machine-config-operator-74547568cd-v46wf\" (UID: \"2eb44459-26b2-48d3-931c-e718dda5133b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v46wf" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.090107 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.109312 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.128965 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.149375 4981 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.169168 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.189284 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.209466 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 27 
18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.229134 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.249432 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.269524 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.318993 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqr8k\" (UniqueName: \"kubernetes.io/projected/c616c83f-0616-4a1d-b2ac-69cdc88eef70-kube-api-access-wqr8k\") pod \"authentication-operator-69f744f599-9h9p4\" (UID: \"c616c83f-0616-4a1d-b2ac-69cdc88eef70\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9h9p4" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.337443 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhw69\" (UniqueName: \"kubernetes.io/projected/bbbe9d1b-2a55-4b34-b452-32f51eef3278-kube-api-access-mhw69\") pod \"openshift-config-operator-7777fb866f-h8fzp\" (UID: \"bbbe9d1b-2a55-4b34-b452-32f51eef3278\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-h8fzp" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.356565 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58j65\" (UniqueName: \"kubernetes.io/projected/3bedbc35-5c52-4c25-a77b-dcbc4a5dbc21-kube-api-access-58j65\") pod \"openshift-apiserver-operator-796bbdcf4f-h885x\" (UID: \"3bedbc35-5c52-4c25-a77b-dcbc4a5dbc21\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h885x" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.367085 4981 request.go:700] Waited for 1.928610974s due to client-side 
throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.375757 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgbg7\" (UniqueName: \"kubernetes.io/projected/10e31a3f-eb88-4c8b-93e7-e251f762d29e-kube-api-access-lgbg7\") pod \"route-controller-manager-6576b87f9c-ns4jv\" (UID: \"10e31a3f-eb88-4c8b-93e7-e251f762d29e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.393402 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrlns\" (UniqueName: \"kubernetes.io/projected/1107c99b-98a7-4103-9e6c-dde234daacaf-kube-api-access-mrlns\") pod \"machine-api-operator-5694c8668f-jdhvt\" (UID: \"1107c99b-98a7-4103-9e6c-dde234daacaf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jdhvt" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.407450 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h885x" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.415181 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzdvc\" (UniqueName: \"kubernetes.io/projected/46dc2075-e24c-46cf-9885-3de46322461d-kube-api-access-hzdvc\") pod \"apiserver-7bbb656c7d-ckjzg\" (UID: \"46dc2075-e24c-46cf-9885-3de46322461d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.434912 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf5mb\" (UniqueName: \"kubernetes.io/projected/86f8ab04-83b9-497b-a8b4-cde27e61d568-kube-api-access-nf5mb\") pod \"oauth-openshift-558db77b4-v4vk8\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.441636 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9h9p4" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.460727 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.462680 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v87jv\" (UniqueName: \"kubernetes.io/projected/8071e59a-5e70-4dac-b36c-e17dd74f75b0-kube-api-access-v87jv\") pod \"machine-approver-56656f9798-xmtml\" (UID: \"8071e59a-5e70-4dac-b36c-e17dd74f75b0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xmtml" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.476352 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdvf5\" (UniqueName: \"kubernetes.io/projected/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-kube-api-access-rdvf5\") pod \"controller-manager-879f6c89f-rn6m5\" (UID: \"89fbbf29-4a7d-40d3-ad12-9e1111396e8d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.487972 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h8fzp" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.490818 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2h8x\" (UniqueName: \"kubernetes.io/projected/9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc-kube-api-access-g2h8x\") pod \"cluster-image-registry-operator-dc59b4c8b-922ln\" (UID: \"9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-922ln" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.493609 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.519440 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89kz9\" (UniqueName: \"kubernetes.io/projected/79fd2558-1055-4895-9db5-58da8eb8aacf-kube-api-access-89kz9\") pod \"migrator-59844c95c7-qb2m5\" (UID: \"79fd2558-1055-4895-9db5-58da8eb8aacf\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qb2m5" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.530990 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tckjt\" (UniqueName: \"kubernetes.io/projected/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-kube-api-access-tckjt\") pod \"console-f9d7485db-dllzn\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") " pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.548430 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjbj5\" (UniqueName: \"kubernetes.io/projected/f25a5ec8-b28e-4b5c-a4f0-c6ad09e63b3d-kube-api-access-mjbj5\") pod \"kube-storage-version-migrator-operator-b67b599dd-5cs45\" (UID: \"f25a5ec8-b28e-4b5c-a4f0-c6ad09e63b3d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5cs45" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.571296 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bptxk\" (UniqueName: \"kubernetes.io/projected/450816f5-eb2f-44e6-9b62-fd3f3b2fbf48-kube-api-access-bptxk\") pod \"cluster-samples-operator-665b6dd947-j7hs2\" (UID: \"450816f5-eb2f-44e6-9b62-fd3f3b2fbf48\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7hs2" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.592414 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94hqc\" 
(UniqueName: \"kubernetes.io/projected/23834dc7-7dcc-4a63-b666-3c1829da4cf4-kube-api-access-94hqc\") pod \"machine-config-controller-84d6567774-l8kzl\" (UID: \"23834dc7-7dcc-4a63-b666-3c1829da4cf4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l8kzl" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.596860 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5cs45" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.608285 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9dac184-b115-4584-8344-d4cfea132d7d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-swwkp\" (UID: \"e9dac184-b115-4584-8344-d4cfea132d7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-swwkp" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.628743 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh69n\" (UniqueName: \"kubernetes.io/projected/b3b236af-9b1b-45f2-8780-ea9d4726bd0f-kube-api-access-fh69n\") pod \"catalog-operator-68c6474976-85vdw\" (UID: \"b3b236af-9b1b-45f2-8780-ea9d4726bd0f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85vdw" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.629079 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.642343 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-swwkp" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.645543 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qb2m5" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.651991 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th628\" (UniqueName: \"kubernetes.io/projected/b7d035ce-f026-4668-9fca-c344f1fe60e3-kube-api-access-th628\") pod \"collect-profiles-29536965-kh5p2\" (UID: \"b7d035ce-f026-4668-9fca-c344f1fe60e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-kh5p2" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.663028 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m49pz\" (UniqueName: \"kubernetes.io/projected/2d1ccfd2-99ab-4caf-82d3-6b58656de39f-kube-api-access-m49pz\") pod \"router-default-5444994796-r858c\" (UID: \"2d1ccfd2-99ab-4caf-82d3-6b58656de39f\") " pod="openshift-ingress/router-default-5444994796-r858c" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.663656 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jdhvt" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.680780 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.686468 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l8kzl" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.690433 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41343732-7cc4-4fe6-9435-8a6332ba522c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5f96t\" (UID: \"41343732-7cc4-4fe6-9435-8a6332ba522c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5f96t" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.702926 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h475c\" (UniqueName: \"kubernetes.io/projected/2eb44459-26b2-48d3-931c-e718dda5133b-kube-api-access-h475c\") pod \"machine-config-operator-74547568cd-v46wf\" (UID: \"2eb44459-26b2-48d3-931c-e718dda5133b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v46wf" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.710651 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-kh5p2" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.723940 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49xsz\" (UniqueName: \"kubernetes.io/projected/a086194a-cb30-43f5-8cf2-f67d0f85a36d-kube-api-access-49xsz\") pod \"etcd-operator-b45778765-hx8gq\" (UID: \"a086194a-cb30-43f5-8cf2-f67d0f85a36d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hx8gq" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.732304 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xmtml" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.736097 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h885x"] Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.744317 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w45dz\" (UniqueName: \"kubernetes.io/projected/01036322-73f2-4c61-b59d-ff9eff5d4b5f-kube-api-access-w45dz\") pod \"olm-operator-6b444d44fb-g2q7p\" (UID: \"01036322-73f2-4c61-b59d-ff9eff5d4b5f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2q7p" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.776948 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfzkt\" (UniqueName: \"kubernetes.io/projected/32d9179a-38c6-482f-95be-c94b48b83856-kube-api-access-nfzkt\") pod \"downloads-7954f5f757-t9grl\" (UID: \"32d9179a-38c6-482f-95be-c94b48b83856\") " pod="openshift-console/downloads-7954f5f757-t9grl" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.784251 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85vdw" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.787712 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-922ln\" (UID: \"9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-922ln" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.804174 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwjkt\" (UniqueName: \"kubernetes.io/projected/05d54587-469b-407f-aa60-f0fdefb9dce7-kube-api-access-zwjkt\") pod \"ingress-operator-5b745b69d9-cpb6p\" (UID: \"05d54587-469b-407f-aa60-f0fdefb9dce7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cpb6p" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.817922 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7hs2" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.824441 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.832004 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-t9grl" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.839388 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x9zw\" (UniqueName: \"kubernetes.io/projected/deda0ab1-f81e-4898-b4cb-5627947b5ed4-kube-api-access-5x9zw\") pod \"apiserver-76f77b778f-g8dqb\" (UID: \"deda0ab1-f81e-4898-b4cb-5627947b5ed4\") " pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.853938 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm7pd\" (UniqueName: \"kubernetes.io/projected/981ad6df-4f80-446c-83a8-cf8e4bc7436d-kube-api-access-fm7pd\") pod \"console-operator-58897d9998-7mfkz\" (UID: \"981ad6df-4f80-446c-83a8-cf8e4bc7436d\") " pod="openshift-console-operator/console-operator-58897d9998-7mfkz" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.869368 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hx8gq" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.869716 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-922ln" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.870090 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6ngw\" (UniqueName: \"kubernetes.io/projected/cd14f9a5-6dec-4df9-99a3-94c248ae4f60-kube-api-access-s6ngw\") pod \"openshift-controller-manager-operator-756b6f6bc6-sfsnw\" (UID: \"cd14f9a5-6dec-4df9-99a3-94c248ae4f60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sfsnw" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.881641 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5f96t" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.882992 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-h8fzp"] Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.890736 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv654\" (UniqueName: \"kubernetes.io/projected/9904cc6d-0e10-4a0b-bd3b-c0e6592b4856-kube-api-access-cv654\") pod \"multus-admission-controller-857f4d67dd-xnkzj\" (UID: \"9904cc6d-0e10-4a0b-bd3b-c0e6592b4856\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xnkzj" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.904202 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.905954 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgfq4\" (UniqueName: \"kubernetes.io/projected/d5b69559-dbbb-451e-8a89-0d8c61a363f3-kube-api-access-pgfq4\") pod \"control-plane-machine-set-operator-78cbb6b69f-bkfrt\" (UID: \"d5b69559-dbbb-451e-8a89-0d8c61a363f3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bkfrt" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.925600 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05d54587-469b-407f-aa60-f0fdefb9dce7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-cpb6p\" (UID: \"05d54587-469b-407f-aa60-f0fdefb9dce7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cpb6p" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.931951 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-r858c" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.943626 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mj9r\" (UniqueName: \"kubernetes.io/projected/3667fc6c-078a-4be4-95ff-7174c74faf2c-kube-api-access-8mj9r\") pod \"dns-operator-744455d44c-c4m77\" (UID: \"3667fc6c-078a-4be4-95ff-7174c74faf2c\") " pod="openshift-dns-operator/dns-operator-744455d44c-c4m77" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.961498 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv"] Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.967317 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37a55d62-b540-4883-9548-3c0da02e8824-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2vmct\" (UID: \"37a55d62-b540-4883-9548-3c0da02e8824\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2vmct" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.969899 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.980796 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bkfrt" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.984608 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg"] Feb 27 18:47:54 crc kubenswrapper[4981]: W0227 18:47:54.992774 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10e31a3f_eb88_4c8b_93e7_e251f762d29e.slice/crio-8a15199296ee77b1150176cf676274ec35958768c4c01a43d4aabf77f7f659ff WatchSource:0}: Error finding container 8a15199296ee77b1150176cf676274ec35958768c4c01a43d4aabf77f7f659ff: Status 404 returned error can't find the container with id 8a15199296ee77b1150176cf676274ec35958768c4c01a43d4aabf77f7f659ff Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.992984 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2q7p" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.993563 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 27 18:47:54 crc kubenswrapper[4981]: I0227 18:47:54.996182 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-xnkzj" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.005529 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qb2m5"] Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.006846 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v46wf" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.010880 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.029870 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9h9p4"] Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.030185 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.048536 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.069814 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.101601 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sfsnw" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.110378 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-7mfkz" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.133348 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5cs45"] Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.141201 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rn6m5"] Feb 27 18:47:55 crc kubenswrapper[4981]: W0227 18:47:55.181573 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf25a5ec8_b28e_4b5c_a4f0_c6ad09e63b3d.slice/crio-8933672611ae7c674d6f40e0f2a3ce25d87fb9e4bb5f45fe10ab607eb9ab8dbd WatchSource:0}: Error finding container 8933672611ae7c674d6f40e0f2a3ce25d87fb9e4bb5f45fe10ab607eb9ab8dbd: Status 404 returned error can't find the container with id 8933672611ae7c674d6f40e0f2a3ce25d87fb9e4bb5f45fe10ab607eb9ab8dbd Feb 27 18:47:55 crc kubenswrapper[4981]: W0227 18:47:55.185875 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89fbbf29_4a7d_40d3_ad12_9e1111396e8d.slice/crio-d8f2b86c6ee8c6c5f717e9689e29bbe68b5de87293db2a266267c873d0ceab26 WatchSource:0}: Error finding container d8f2b86c6ee8c6c5f717e9689e29bbe68b5de87293db2a266267c873d0ceab26: Status 404 returned error can't find the container with id d8f2b86c6ee8c6c5f717e9689e29bbe68b5de87293db2a266267c873d0ceab26 Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.187443 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-c4m77" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.196481 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a86208a8-d898-447f-ba80-f6b72f601ef0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.196521 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ef632318-2ac5-418d-b9d4-dcd616b4d768-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pcmgp\" (UID: \"ef632318-2ac5-418d-b9d4-dcd616b4d768\") " pod="openshift-marketplace/marketplace-operator-79b997595-pcmgp" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.196553 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a86208a8-d898-447f-ba80-f6b72f601ef0-bound-sa-token\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.196579 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.196680 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7hmb\" (UniqueName: \"kubernetes.io/projected/ef632318-2ac5-418d-b9d4-dcd616b4d768-kube-api-access-s7hmb\") pod \"marketplace-operator-79b997595-pcmgp\" (UID: \"ef632318-2ac5-418d-b9d4-dcd616b4d768\") " pod="openshift-marketplace/marketplace-operator-79b997595-pcmgp" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.196780 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a86208a8-d898-447f-ba80-f6b72f601ef0-registry-certificates\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: E0227 18:47:55.196831 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:55.696818392 +0000 UTC m=+175.175599552 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.196850 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a86208a8-d898-447f-ba80-f6b72f601ef0-registry-tls\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.196870 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a86208a8-d898-447f-ba80-f6b72f601ef0-trusted-ca\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.196887 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57gr5\" (UniqueName: \"kubernetes.io/projected/a86208a8-d898-447f-ba80-f6b72f601ef0-kube-api-access-57gr5\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.196910 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/a86208a8-d898-447f-ba80-f6b72f601ef0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.196939 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef632318-2ac5-418d-b9d4-dcd616b4d768-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pcmgp\" (UID: \"ef632318-2ac5-418d-b9d4-dcd616b4d768\") " pod="openshift-marketplace/marketplace-operator-79b997595-pcmgp" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.219629 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cpb6p" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.224192 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2vmct" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.269020 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jdhvt"] Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.279699 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-swwkp"] Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.280407 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-l8kzl"] Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.298540 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:47:55 crc kubenswrapper[4981]: E0227 18:47:55.300416 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:55.800398092 +0000 UTC m=+175.279179252 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.308243 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/310c5674-0397-4720-b996-d50d28ebc783-webhook-cert\") pod \"packageserver-d55dfcdfc-plxr2\" (UID: \"310c5674-0397-4720-b996-d50d28ebc783\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-plxr2" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.308281 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f5ecc4da-0cfa-4632-8478-e48a3c8aba36-mountpoint-dir\") pod \"csi-hostpathplugin-7r9r7\" (UID: \"f5ecc4da-0cfa-4632-8478-e48a3c8aba36\") " pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.308358 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/310c5674-0397-4720-b996-d50d28ebc783-apiservice-cert\") pod \"packageserver-d55dfcdfc-plxr2\" (UID: \"310c5674-0397-4720-b996-d50d28ebc783\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-plxr2" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.308416 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8977aa66-1107-4ab9-a774-0e6cebffd78f-certs\") pod 
\"machine-config-server-s7wqk\" (UID: \"8977aa66-1107-4ab9-a774-0e6cebffd78f\") " pod="openshift-machine-config-operator/machine-config-server-s7wqk" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.308444 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a86208a8-d898-447f-ba80-f6b72f601ef0-registry-certificates\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.308572 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ee74fec8-5dc9-4e29-8a9c-1ac61fc34a49-signing-key\") pod \"service-ca-9c57cc56f-5pgcp\" (UID: \"ee74fec8-5dc9-4e29-8a9c-1ac61fc34a49\") " pod="openshift-service-ca/service-ca-9c57cc56f-5pgcp" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.308640 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ee74fec8-5dc9-4e29-8a9c-1ac61fc34a49-signing-cabundle\") pod \"service-ca-9c57cc56f-5pgcp\" (UID: \"ee74fec8-5dc9-4e29-8a9c-1ac61fc34a49\") " pod="openshift-service-ca/service-ca-9c57cc56f-5pgcp" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.308660 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg64m\" (UniqueName: \"kubernetes.io/projected/f7aedfc8-3056-44a8-9389-f098e2bc39ec-kube-api-access-pg64m\") pod \"ingress-canary-n7dd8\" (UID: \"f7aedfc8-3056-44a8-9389-f098e2bc39ec\") " pod="openshift-ingress-canary/ingress-canary-n7dd8" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.308715 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/df20cd9b-2dce-4295-84fb-d62f277eea32-serving-cert\") pod \"service-ca-operator-777779d784-v5c96\" (UID: \"df20cd9b-2dce-4295-84fb-d62f277eea32\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v5c96" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.308752 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a86208a8-d898-447f-ba80-f6b72f601ef0-registry-tls\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.308768 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57gr5\" (UniqueName: \"kubernetes.io/projected/a86208a8-d898-447f-ba80-f6b72f601ef0-kube-api-access-57gr5\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.308785 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1411e866-be2e-40c6-af1a-7a8ddca854d3-metrics-tls\") pod \"dns-default-gv2d7\" (UID: \"1411e866-be2e-40c6-af1a-7a8ddca854d3\") " pod="openshift-dns/dns-default-gv2d7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.308818 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a86208a8-d898-447f-ba80-f6b72f601ef0-trusted-ca\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.308859 4981 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a86208a8-d898-447f-ba80-f6b72f601ef0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.309003 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef632318-2ac5-418d-b9d4-dcd616b4d768-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pcmgp\" (UID: \"ef632318-2ac5-418d-b9d4-dcd616b4d768\") " pod="openshift-marketplace/marketplace-operator-79b997595-pcmgp" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.309044 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f5ecc4da-0cfa-4632-8478-e48a3c8aba36-registration-dir\") pod \"csi-hostpathplugin-7r9r7\" (UID: \"f5ecc4da-0cfa-4632-8478-e48a3c8aba36\") " pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.309094 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a86208a8-d898-447f-ba80-f6b72f601ef0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.309122 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5snm\" (UniqueName: \"kubernetes.io/projected/df20cd9b-2dce-4295-84fb-d62f277eea32-kube-api-access-q5snm\") pod \"service-ca-operator-777779d784-v5c96\" (UID: \"df20cd9b-2dce-4295-84fb-d62f277eea32\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-v5c96" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.309147 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f5ecc4da-0cfa-4632-8478-e48a3c8aba36-plugins-dir\") pod \"csi-hostpathplugin-7r9r7\" (UID: \"f5ecc4da-0cfa-4632-8478-e48a3c8aba36\") " pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.309172 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f7e7ee0e-64a2-45c2-9ed6-82998fd8687d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fqktd\" (UID: \"f7e7ee0e-64a2-45c2-9ed6-82998fd8687d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqktd" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.309187 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4r4t\" (UniqueName: \"kubernetes.io/projected/f5ecc4da-0cfa-4632-8478-e48a3c8aba36-kube-api-access-f4r4t\") pod \"csi-hostpathplugin-7r9r7\" (UID: \"f5ecc4da-0cfa-4632-8478-e48a3c8aba36\") " pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.309212 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ef632318-2ac5-418d-b9d4-dcd616b4d768-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pcmgp\" (UID: \"ef632318-2ac5-418d-b9d4-dcd616b4d768\") " pod="openshift-marketplace/marketplace-operator-79b997595-pcmgp" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.309266 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/310c5674-0397-4720-b996-d50d28ebc783-tmpfs\") pod \"packageserver-d55dfcdfc-plxr2\" (UID: \"310c5674-0397-4720-b996-d50d28ebc783\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-plxr2" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.309283 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6z24\" (UniqueName: \"kubernetes.io/projected/310c5674-0397-4720-b996-d50d28ebc783-kube-api-access-r6z24\") pod \"packageserver-d55dfcdfc-plxr2\" (UID: \"310c5674-0397-4720-b996-d50d28ebc783\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-plxr2" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.309327 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxnpx\" (UniqueName: \"kubernetes.io/projected/f7e7ee0e-64a2-45c2-9ed6-82998fd8687d-kube-api-access-cxnpx\") pod \"package-server-manager-789f6589d5-fqktd\" (UID: \"f7e7ee0e-64a2-45c2-9ed6-82998fd8687d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqktd" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.309384 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a86208a8-d898-447f-ba80-f6b72f601ef0-bound-sa-token\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.309412 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlq82\" (UniqueName: \"kubernetes.io/projected/1411e866-be2e-40c6-af1a-7a8ddca854d3-kube-api-access-mlq82\") pod \"dns-default-gv2d7\" (UID: \"1411e866-be2e-40c6-af1a-7a8ddca854d3\") " pod="openshift-dns/dns-default-gv2d7" 
Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.309427 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7aedfc8-3056-44a8-9389-f098e2bc39ec-cert\") pod \"ingress-canary-n7dd8\" (UID: \"f7aedfc8-3056-44a8-9389-f098e2bc39ec\") " pod="openshift-ingress-canary/ingress-canary-n7dd8" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.309450 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.309467 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f5ecc4da-0cfa-4632-8478-e48a3c8aba36-socket-dir\") pod \"csi-hostpathplugin-7r9r7\" (UID: \"f5ecc4da-0cfa-4632-8478-e48a3c8aba36\") " pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.309534 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1411e866-be2e-40c6-af1a-7a8ddca854d3-config-volume\") pod \"dns-default-gv2d7\" (UID: \"1411e866-be2e-40c6-af1a-7a8ddca854d3\") " pod="openshift-dns/dns-default-gv2d7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.309713 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnwdt\" (UniqueName: \"kubernetes.io/projected/8977aa66-1107-4ab9-a774-0e6cebffd78f-kube-api-access-bnwdt\") pod \"machine-config-server-s7wqk\" (UID: 
\"8977aa66-1107-4ab9-a774-0e6cebffd78f\") " pod="openshift-machine-config-operator/machine-config-server-s7wqk" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.309732 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df20cd9b-2dce-4295-84fb-d62f277eea32-config\") pod \"service-ca-operator-777779d784-v5c96\" (UID: \"df20cd9b-2dce-4295-84fb-d62f277eea32\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v5c96" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.309769 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7hmb\" (UniqueName: \"kubernetes.io/projected/ef632318-2ac5-418d-b9d4-dcd616b4d768-kube-api-access-s7hmb\") pod \"marketplace-operator-79b997595-pcmgp\" (UID: \"ef632318-2ac5-418d-b9d4-dcd616b4d768\") " pod="openshift-marketplace/marketplace-operator-79b997595-pcmgp" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.309785 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4zf2\" (UniqueName: \"kubernetes.io/projected/ee74fec8-5dc9-4e29-8a9c-1ac61fc34a49-kube-api-access-z4zf2\") pod \"service-ca-9c57cc56f-5pgcp\" (UID: \"ee74fec8-5dc9-4e29-8a9c-1ac61fc34a49\") " pod="openshift-service-ca/service-ca-9c57cc56f-5pgcp" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.309815 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8977aa66-1107-4ab9-a774-0e6cebffd78f-node-bootstrap-token\") pod \"machine-config-server-s7wqk\" (UID: \"8977aa66-1107-4ab9-a774-0e6cebffd78f\") " pod="openshift-machine-config-operator/machine-config-server-s7wqk" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.309836 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f5ecc4da-0cfa-4632-8478-e48a3c8aba36-csi-data-dir\") pod \"csi-hostpathplugin-7r9r7\" (UID: \"f5ecc4da-0cfa-4632-8478-e48a3c8aba36\") " pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.313569 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a86208a8-d898-447f-ba80-f6b72f601ef0-registry-certificates\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.326154 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a86208a8-d898-447f-ba80-f6b72f601ef0-trusted-ca\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.326179 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef632318-2ac5-418d-b9d4-dcd616b4d768-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pcmgp\" (UID: \"ef632318-2ac5-418d-b9d4-dcd616b4d768\") " pod="openshift-marketplace/marketplace-operator-79b997595-pcmgp" Feb 27 18:47:55 crc kubenswrapper[4981]: E0227 18:47:55.326404 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:55.82638893 +0000 UTC m=+175.305170090 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.327300 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a86208a8-d898-447f-ba80-f6b72f601ef0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.335611 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a86208a8-d898-447f-ba80-f6b72f601ef0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.345714 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a86208a8-d898-447f-ba80-f6b72f601ef0-registry-tls\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.356172 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ef632318-2ac5-418d-b9d4-dcd616b4d768-marketplace-operator-metrics\") pod 
\"marketplace-operator-79b997595-pcmgp\" (UID: \"ef632318-2ac5-418d-b9d4-dcd616b4d768\") " pod="openshift-marketplace/marketplace-operator-79b997595-pcmgp" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.357772 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7hmb\" (UniqueName: \"kubernetes.io/projected/ef632318-2ac5-418d-b9d4-dcd616b4d768-kube-api-access-s7hmb\") pod \"marketplace-operator-79b997595-pcmgp\" (UID: \"ef632318-2ac5-418d-b9d4-dcd616b4d768\") " pod="openshift-marketplace/marketplace-operator-79b997595-pcmgp" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.380988 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v4vk8"] Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.389232 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a86208a8-d898-447f-ba80-f6b72f601ef0-bound-sa-token\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.406726 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57gr5\" (UniqueName: \"kubernetes.io/projected/a86208a8-d898-447f-ba80-f6b72f601ef0-kube-api-access-57gr5\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.410463 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:47:55 crc 
kubenswrapper[4981]: I0227 18:47:55.410614 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f5ecc4da-0cfa-4632-8478-e48a3c8aba36-plugins-dir\") pod \"csi-hostpathplugin-7r9r7\" (UID: \"f5ecc4da-0cfa-4632-8478-e48a3c8aba36\") " pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.410659 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f7e7ee0e-64a2-45c2-9ed6-82998fd8687d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fqktd\" (UID: \"f7e7ee0e-64a2-45c2-9ed6-82998fd8687d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqktd" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.410680 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4r4t\" (UniqueName: \"kubernetes.io/projected/f5ecc4da-0cfa-4632-8478-e48a3c8aba36-kube-api-access-f4r4t\") pod \"csi-hostpathplugin-7r9r7\" (UID: \"f5ecc4da-0cfa-4632-8478-e48a3c8aba36\") " pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.410704 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/310c5674-0397-4720-b996-d50d28ebc783-tmpfs\") pod \"packageserver-d55dfcdfc-plxr2\" (UID: \"310c5674-0397-4720-b996-d50d28ebc783\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-plxr2" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.410721 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6z24\" (UniqueName: \"kubernetes.io/projected/310c5674-0397-4720-b996-d50d28ebc783-kube-api-access-r6z24\") pod \"packageserver-d55dfcdfc-plxr2\" (UID: 
\"310c5674-0397-4720-b996-d50d28ebc783\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-plxr2" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.410743 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxnpx\" (UniqueName: \"kubernetes.io/projected/f7e7ee0e-64a2-45c2-9ed6-82998fd8687d-kube-api-access-cxnpx\") pod \"package-server-manager-789f6589d5-fqktd\" (UID: \"f7e7ee0e-64a2-45c2-9ed6-82998fd8687d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqktd" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.410764 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlq82\" (UniqueName: \"kubernetes.io/projected/1411e866-be2e-40c6-af1a-7a8ddca854d3-kube-api-access-mlq82\") pod \"dns-default-gv2d7\" (UID: \"1411e866-be2e-40c6-af1a-7a8ddca854d3\") " pod="openshift-dns/dns-default-gv2d7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.410780 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7aedfc8-3056-44a8-9389-f098e2bc39ec-cert\") pod \"ingress-canary-n7dd8\" (UID: \"f7aedfc8-3056-44a8-9389-f098e2bc39ec\") " pod="openshift-ingress-canary/ingress-canary-n7dd8" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.410801 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f5ecc4da-0cfa-4632-8478-e48a3c8aba36-socket-dir\") pod \"csi-hostpathplugin-7r9r7\" (UID: \"f5ecc4da-0cfa-4632-8478-e48a3c8aba36\") " pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.410819 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1411e866-be2e-40c6-af1a-7a8ddca854d3-config-volume\") pod \"dns-default-gv2d7\" (UID: 
\"1411e866-be2e-40c6-af1a-7a8ddca854d3\") " pod="openshift-dns/dns-default-gv2d7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.410844 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4zf2\" (UniqueName: \"kubernetes.io/projected/ee74fec8-5dc9-4e29-8a9c-1ac61fc34a49-kube-api-access-z4zf2\") pod \"service-ca-9c57cc56f-5pgcp\" (UID: \"ee74fec8-5dc9-4e29-8a9c-1ac61fc34a49\") " pod="openshift-service-ca/service-ca-9c57cc56f-5pgcp" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.410863 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnwdt\" (UniqueName: \"kubernetes.io/projected/8977aa66-1107-4ab9-a774-0e6cebffd78f-kube-api-access-bnwdt\") pod \"machine-config-server-s7wqk\" (UID: \"8977aa66-1107-4ab9-a774-0e6cebffd78f\") " pod="openshift-machine-config-operator/machine-config-server-s7wqk" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.410878 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df20cd9b-2dce-4295-84fb-d62f277eea32-config\") pod \"service-ca-operator-777779d784-v5c96\" (UID: \"df20cd9b-2dce-4295-84fb-d62f277eea32\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v5c96" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.410893 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f5ecc4da-0cfa-4632-8478-e48a3c8aba36-csi-data-dir\") pod \"csi-hostpathplugin-7r9r7\" (UID: \"f5ecc4da-0cfa-4632-8478-e48a3c8aba36\") " pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.410907 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8977aa66-1107-4ab9-a774-0e6cebffd78f-node-bootstrap-token\") pod 
\"machine-config-server-s7wqk\" (UID: \"8977aa66-1107-4ab9-a774-0e6cebffd78f\") " pod="openshift-machine-config-operator/machine-config-server-s7wqk" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.410928 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f5ecc4da-0cfa-4632-8478-e48a3c8aba36-mountpoint-dir\") pod \"csi-hostpathplugin-7r9r7\" (UID: \"f5ecc4da-0cfa-4632-8478-e48a3c8aba36\") " pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.410942 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/310c5674-0397-4720-b996-d50d28ebc783-webhook-cert\") pod \"packageserver-d55dfcdfc-plxr2\" (UID: \"310c5674-0397-4720-b996-d50d28ebc783\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-plxr2" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.410955 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/310c5674-0397-4720-b996-d50d28ebc783-apiservice-cert\") pod \"packageserver-d55dfcdfc-plxr2\" (UID: \"310c5674-0397-4720-b996-d50d28ebc783\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-plxr2" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.410977 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8977aa66-1107-4ab9-a774-0e6cebffd78f-certs\") pod \"machine-config-server-s7wqk\" (UID: \"8977aa66-1107-4ab9-a774-0e6cebffd78f\") " pod="openshift-machine-config-operator/machine-config-server-s7wqk" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.410994 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/ee74fec8-5dc9-4e29-8a9c-1ac61fc34a49-signing-key\") pod \"service-ca-9c57cc56f-5pgcp\" (UID: \"ee74fec8-5dc9-4e29-8a9c-1ac61fc34a49\") " pod="openshift-service-ca/service-ca-9c57cc56f-5pgcp" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.411015 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ee74fec8-5dc9-4e29-8a9c-1ac61fc34a49-signing-cabundle\") pod \"service-ca-9c57cc56f-5pgcp\" (UID: \"ee74fec8-5dc9-4e29-8a9c-1ac61fc34a49\") " pod="openshift-service-ca/service-ca-9c57cc56f-5pgcp" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.411030 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg64m\" (UniqueName: \"kubernetes.io/projected/f7aedfc8-3056-44a8-9389-f098e2bc39ec-kube-api-access-pg64m\") pod \"ingress-canary-n7dd8\" (UID: \"f7aedfc8-3056-44a8-9389-f098e2bc39ec\") " pod="openshift-ingress-canary/ingress-canary-n7dd8" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.411048 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df20cd9b-2dce-4295-84fb-d62f277eea32-serving-cert\") pod \"service-ca-operator-777779d784-v5c96\" (UID: \"df20cd9b-2dce-4295-84fb-d62f277eea32\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v5c96" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.411078 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1411e866-be2e-40c6-af1a-7a8ddca854d3-metrics-tls\") pod \"dns-default-gv2d7\" (UID: \"1411e866-be2e-40c6-af1a-7a8ddca854d3\") " pod="openshift-dns/dns-default-gv2d7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.411107 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/f5ecc4da-0cfa-4632-8478-e48a3c8aba36-registration-dir\") pod \"csi-hostpathplugin-7r9r7\" (UID: \"f5ecc4da-0cfa-4632-8478-e48a3c8aba36\") " pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.411123 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5snm\" (UniqueName: \"kubernetes.io/projected/df20cd9b-2dce-4295-84fb-d62f277eea32-kube-api-access-q5snm\") pod \"service-ca-operator-777779d784-v5c96\" (UID: \"df20cd9b-2dce-4295-84fb-d62f277eea32\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v5c96" Feb 27 18:47:55 crc kubenswrapper[4981]: E0227 18:47:55.411274 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:55.911260113 +0000 UTC m=+175.390041273 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.411570 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f5ecc4da-0cfa-4632-8478-e48a3c8aba36-plugins-dir\") pod \"csi-hostpathplugin-7r9r7\" (UID: \"f5ecc4da-0cfa-4632-8478-e48a3c8aba36\") " pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.411672 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f5ecc4da-0cfa-4632-8478-e48a3c8aba36-mountpoint-dir\") pod \"csi-hostpathplugin-7r9r7\" (UID: \"f5ecc4da-0cfa-4632-8478-e48a3c8aba36\") " pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.411769 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f5ecc4da-0cfa-4632-8478-e48a3c8aba36-csi-data-dir\") pod \"csi-hostpathplugin-7r9r7\" (UID: \"f5ecc4da-0cfa-4632-8478-e48a3c8aba36\") " pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.412819 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df20cd9b-2dce-4295-84fb-d62f277eea32-config\") pod \"service-ca-operator-777779d784-v5c96\" (UID: \"df20cd9b-2dce-4295-84fb-d62f277eea32\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v5c96" Feb 27 
18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.413282 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f5ecc4da-0cfa-4632-8478-e48a3c8aba36-registration-dir\") pod \"csi-hostpathplugin-7r9r7\" (UID: \"f5ecc4da-0cfa-4632-8478-e48a3c8aba36\") " pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.414492 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ee74fec8-5dc9-4e29-8a9c-1ac61fc34a49-signing-key\") pod \"service-ca-9c57cc56f-5pgcp\" (UID: \"ee74fec8-5dc9-4e29-8a9c-1ac61fc34a49\") " pod="openshift-service-ca/service-ca-9c57cc56f-5pgcp" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.414872 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8977aa66-1107-4ab9-a774-0e6cebffd78f-certs\") pod \"machine-config-server-s7wqk\" (UID: \"8977aa66-1107-4ab9-a774-0e6cebffd78f\") " pod="openshift-machine-config-operator/machine-config-server-s7wqk" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.414974 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ee74fec8-5dc9-4e29-8a9c-1ac61fc34a49-signing-cabundle\") pod \"service-ca-9c57cc56f-5pgcp\" (UID: \"ee74fec8-5dc9-4e29-8a9c-1ac61fc34a49\") " pod="openshift-service-ca/service-ca-9c57cc56f-5pgcp" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.416169 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/310c5674-0397-4720-b996-d50d28ebc783-webhook-cert\") pod \"packageserver-d55dfcdfc-plxr2\" (UID: \"310c5674-0397-4720-b996-d50d28ebc783\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-plxr2" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.416332 
4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f5ecc4da-0cfa-4632-8478-e48a3c8aba36-socket-dir\") pod \"csi-hostpathplugin-7r9r7\" (UID: \"f5ecc4da-0cfa-4632-8478-e48a3c8aba36\") " pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.416853 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df20cd9b-2dce-4295-84fb-d62f277eea32-serving-cert\") pod \"service-ca-operator-777779d784-v5c96\" (UID: \"df20cd9b-2dce-4295-84fb-d62f277eea32\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v5c96" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.416895 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1411e866-be2e-40c6-af1a-7a8ddca854d3-metrics-tls\") pod \"dns-default-gv2d7\" (UID: \"1411e866-be2e-40c6-af1a-7a8ddca854d3\") " pod="openshift-dns/dns-default-gv2d7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.418221 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1411e866-be2e-40c6-af1a-7a8ddca854d3-config-volume\") pod \"dns-default-gv2d7\" (UID: \"1411e866-be2e-40c6-af1a-7a8ddca854d3\") " pod="openshift-dns/dns-default-gv2d7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.418728 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/310c5674-0397-4720-b996-d50d28ebc783-apiservice-cert\") pod \"packageserver-d55dfcdfc-plxr2\" (UID: \"310c5674-0397-4720-b996-d50d28ebc783\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-plxr2" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.420609 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8977aa66-1107-4ab9-a774-0e6cebffd78f-node-bootstrap-token\") pod \"machine-config-server-s7wqk\" (UID: \"8977aa66-1107-4ab9-a774-0e6cebffd78f\") " pod="openshift-machine-config-operator/machine-config-server-s7wqk" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.423177 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f7aedfc8-3056-44a8-9389-f098e2bc39ec-cert\") pod \"ingress-canary-n7dd8\" (UID: \"f7aedfc8-3056-44a8-9389-f098e2bc39ec\") " pod="openshift-ingress-canary/ingress-canary-n7dd8" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.424323 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/310c5674-0397-4720-b996-d50d28ebc783-tmpfs\") pod \"packageserver-d55dfcdfc-plxr2\" (UID: \"310c5674-0397-4720-b996-d50d28ebc783\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-plxr2" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.433682 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f7e7ee0e-64a2-45c2-9ed6-82998fd8687d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fqktd\" (UID: \"f7e7ee0e-64a2-45c2-9ed6-82998fd8687d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqktd" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.472128 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5snm\" (UniqueName: \"kubernetes.io/projected/df20cd9b-2dce-4295-84fb-d62f277eea32-kube-api-access-q5snm\") pod \"service-ca-operator-777779d784-v5c96\" (UID: \"df20cd9b-2dce-4295-84fb-d62f277eea32\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-v5c96" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.487785 4981 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlq82\" (UniqueName: \"kubernetes.io/projected/1411e866-be2e-40c6-af1a-7a8ddca854d3-kube-api-access-mlq82\") pod \"dns-default-gv2d7\" (UID: \"1411e866-be2e-40c6-af1a-7a8ddca854d3\") " pod="openshift-dns/dns-default-gv2d7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.507615 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4r4t\" (UniqueName: \"kubernetes.io/projected/f5ecc4da-0cfa-4632-8478-e48a3c8aba36-kube-api-access-f4r4t\") pod \"csi-hostpathplugin-7r9r7\" (UID: \"f5ecc4da-0cfa-4632-8478-e48a3c8aba36\") " pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.512965 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: E0227 18:47:55.513319 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:56.013307596 +0000 UTC m=+175.492088756 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.522979 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pcmgp" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.547437 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxnpx\" (UniqueName: \"kubernetes.io/projected/f7e7ee0e-64a2-45c2-9ed6-82998fd8687d-kube-api-access-cxnpx\") pod \"package-server-manager-789f6589d5-fqktd\" (UID: \"f7e7ee0e-64a2-45c2-9ed6-82998fd8687d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqktd" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.549772 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6z24\" (UniqueName: \"kubernetes.io/projected/310c5674-0397-4720-b996-d50d28ebc783-kube-api-access-r6z24\") pod \"packageserver-d55dfcdfc-plxr2\" (UID: \"310c5674-0397-4720-b996-d50d28ebc783\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-plxr2" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.566938 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4zf2\" (UniqueName: \"kubernetes.io/projected/ee74fec8-5dc9-4e29-8a9c-1ac61fc34a49-kube-api-access-z4zf2\") pod \"service-ca-9c57cc56f-5pgcp\" (UID: \"ee74fec8-5dc9-4e29-8a9c-1ac61fc34a49\") " pod="openshift-service-ca/service-ca-9c57cc56f-5pgcp" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.583854 4981 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnwdt\" (UniqueName: \"kubernetes.io/projected/8977aa66-1107-4ab9-a774-0e6cebffd78f-kube-api-access-bnwdt\") pod \"machine-config-server-s7wqk\" (UID: \"8977aa66-1107-4ab9-a774-0e6cebffd78f\") " pod="openshift-machine-config-operator/machine-config-server-s7wqk" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.599779 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l8kzl" event={"ID":"23834dc7-7dcc-4a63-b666-3c1829da4cf4","Type":"ContainerStarted","Data":"97af6f073121d5e9fad80a73d6c463fb51d7f7f8518f0ef5cc5bdc62f6dd908f"} Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.601431 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9h9p4" event={"ID":"c616c83f-0616-4a1d-b2ac-69cdc88eef70","Type":"ContainerStarted","Data":"021c3a188ff79463eee10f8a6ad9ed2b374590cec2c1914af7883ed189a58734"} Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.604465 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg64m\" (UniqueName: \"kubernetes.io/projected/f7aedfc8-3056-44a8-9389-f098e2bc39ec-kube-api-access-pg64m\") pod \"ingress-canary-n7dd8\" (UID: \"f7aedfc8-3056-44a8-9389-f098e2bc39ec\") " pod="openshift-ingress-canary/ingress-canary-n7dd8" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.606377 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv" event={"ID":"10e31a3f-eb88-4c8b-93e7-e251f762d29e","Type":"ContainerStarted","Data":"0df29c09071a7b4da97a71a4491419a04da0e713c1a2aa146d54add6444af2f3"} Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.606434 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv" event={"ID":"10e31a3f-eb88-4c8b-93e7-e251f762d29e","Type":"ContainerStarted","Data":"8a15199296ee77b1150176cf676274ec35958768c4c01a43d4aabf77f7f659ff"} Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.606451 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.607827 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jdhvt" event={"ID":"1107c99b-98a7-4103-9e6c-dde234daacaf","Type":"ContainerStarted","Data":"bec97a85d0730d2012333f62d811d1ad1449cab2b723c95127abf8f20d259c60"} Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.608890 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" event={"ID":"86f8ab04-83b9-497b-a8b4-cde27e61d568","Type":"ContainerStarted","Data":"5b8816fd66a52dddc23a29e2084d8bff64e7adb1fc379b4aa62f8a421597ca1f"} Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.610861 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-swwkp" event={"ID":"e9dac184-b115-4584-8344-d4cfea132d7d","Type":"ContainerStarted","Data":"6a128f4ad586338bc95f216f0696734a844f97bc0ec423cf6d724db2495abb7a"} Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.612319 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-r858c" event={"ID":"2d1ccfd2-99ab-4caf-82d3-6b58656de39f","Type":"ContainerStarted","Data":"48f52a79c42f6e79cf579f2d0cca81766284806c1f76c054d357b86c2a3ba265"} Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.612343 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-r858c" 
event={"ID":"2d1ccfd2-99ab-4caf-82d3-6b58656de39f","Type":"ContainerStarted","Data":"d67d8a9cdc197fdd7cf5d02c2bde709d8defdac9f4de36c8874459273a8b7300"} Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.614034 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qb2m5" event={"ID":"79fd2558-1055-4895-9db5-58da8eb8aacf","Type":"ContainerStarted","Data":"55dea65954a22903b616a72169c9541ac56e5d94104bfd342a4776023f01e103"} Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.614082 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qb2m5" event={"ID":"79fd2558-1055-4895-9db5-58da8eb8aacf","Type":"ContainerStarted","Data":"941eaf23998325f266f98b28746ba2a0ca81f06109659dbcd249778fa7dc9606"} Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.614124 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:47:55 crc kubenswrapper[4981]: E0227 18:47:55.614195 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:56.114179744 +0000 UTC m=+175.592960904 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.614419 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: E0227 18:47:55.614671 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:56.114664469 +0000 UTC m=+175.593445629 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.617143 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-plxr2" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.619100 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h885x" event={"ID":"3bedbc35-5c52-4c25-a77b-dcbc4a5dbc21","Type":"ContainerStarted","Data":"b5221c5448a2cce93e25eba47d8d65030fc4132db0865223312100267055be77"} Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.619118 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h885x" event={"ID":"3bedbc35-5c52-4c25-a77b-dcbc4a5dbc21","Type":"ContainerStarted","Data":"9f567068321f0b87a2378c02e637ebba9f887a89f4fd55dd8767805b4c052a3a"} Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.621957 4981 generic.go:334] "Generic (PLEG): container finished" podID="bbbe9d1b-2a55-4b34-b452-32f51eef3278" containerID="5ace174e0da61e3f4d7b0437bd72ce12de27320c71af31e5f1190be2e15b3835" exitCode=0 Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.621994 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h8fzp" event={"ID":"bbbe9d1b-2a55-4b34-b452-32f51eef3278","Type":"ContainerDied","Data":"5ace174e0da61e3f4d7b0437bd72ce12de27320c71af31e5f1190be2e15b3835"} Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.622008 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h8fzp" event={"ID":"bbbe9d1b-2a55-4b34-b452-32f51eef3278","Type":"ContainerStarted","Data":"bfb4c622c2608c1375b38e91e6e37413e93718aa705275926f6422e8d3410f15"} Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.624626 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5cs45" 
event={"ID":"f25a5ec8-b28e-4b5c-a4f0-c6ad09e63b3d","Type":"ContainerStarted","Data":"8933672611ae7c674d6f40e0f2a3ce25d87fb9e4bb5f45fe10ab607eb9ab8dbd"} Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.624721 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqktd" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.626124 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" event={"ID":"46dc2075-e24c-46cf-9885-3de46322461d","Type":"ContainerStarted","Data":"af5eb9169ae7b5dfb37e1e6d808827f011e52f2aaac8389e3b8258ec1fa51b89"} Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.627591 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5" event={"ID":"89fbbf29-4a7d-40d3-ad12-9e1111396e8d","Type":"ContainerStarted","Data":"d8f2b86c6ee8c6c5f717e9689e29bbe68b5de87293db2a266267c873d0ceab26"} Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.629205 4981 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-ns4jv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.629261 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv" podUID="10e31a3f-eb88-4c8b-93e7-e251f762d29e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.631544 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-5pgcp" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.636936 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gv2d7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.651711 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v5c96" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.655799 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xmtml" event={"ID":"8071e59a-5e70-4dac-b36c-e17dd74f75b0","Type":"ContainerStarted","Data":"659290cf42a8b8423fc0ba8e88adb170493db1c191ec1bf7a52049b0e3357bf5"} Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.655831 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xmtml" event={"ID":"8071e59a-5e70-4dac-b36c-e17dd74f75b0","Type":"ContainerStarted","Data":"a9233d3c89f1b928b56dd8a07aeff6f9031c15fdb9872497cd5ea7e9cef6ebae"} Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.659346 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-s7wqk" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.681024 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.689438 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-n7dd8" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.716313 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:47:55 crc kubenswrapper[4981]: E0227 18:47:55.719101 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:56.219081944 +0000 UTC m=+175.697863104 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.743916 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-t9grl"] Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.754108 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85vdw"] Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.755611 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-dllzn"] Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.757069 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7hs2"] Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.781294 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536965-kh5p2"] Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.817459 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:55 crc kubenswrapper[4981]: E0227 18:47:55.817780 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:56.317770576 +0000 UTC m=+175.796551736 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:55 crc kubenswrapper[4981]: W0227 18:47:55.889490 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod124fe9f8_0789_4ae2_aa50_6eb0c57f60ea.slice/crio-291c0bd07507e20cc41d357f9174a62410e135697c1c09ac8ee311ce37b434aa WatchSource:0}: Error finding container 291c0bd07507e20cc41d357f9174a62410e135697c1c09ac8ee311ce37b434aa: Status 404 returned error can't find the container with id 291c0bd07507e20cc41d357f9174a62410e135697c1c09ac8ee311ce37b434aa Feb 27 18:47:55 crc kubenswrapper[4981]: W0227 18:47:55.905575 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7d035ce_f026_4668_9fca_c344f1fe60e3.slice/crio-bb0f23f9b6b95712fecdac102863e27bfeb0b1653c9b43001a872ee7f4c8ff34 WatchSource:0}: Error finding container bb0f23f9b6b95712fecdac102863e27bfeb0b1653c9b43001a872ee7f4c8ff34: Status 404 returned error can't find the container with id bb0f23f9b6b95712fecdac102863e27bfeb0b1653c9b43001a872ee7f4c8ff34 Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.917968 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:47:55 crc kubenswrapper[4981]: E0227 18:47:55.918849 4981 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:56.41882498 +0000 UTC m=+175.897606140 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.919902 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g8dqb"] Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.932923 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-r858c" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.937164 4981 patch_prober.go:28] interesting pod/router-default-5444994796-r858c container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.937212 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-r858c" podUID="2d1ccfd2-99ab-4caf-82d3-6b58656de39f" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.942146 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bkfrt"] Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.943703 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hx8gq"] Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.948708 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-922ln"] Feb 27 18:47:55 crc kubenswrapper[4981]: I0227 18:47:55.984778 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5f96t"] Feb 27 18:47:55 crc kubenswrapper[4981]: W0227 18:47:55.985058 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9068b6d6_8e5c_4fbe_a92e_73d6f1dd54bc.slice/crio-a43fde46f86f91f7dca00bdbd545437e25d1a45e0d2cff1a8fecbf5f32dc581a WatchSource:0}: Error finding container a43fde46f86f91f7dca00bdbd545437e25d1a45e0d2cff1a8fecbf5f32dc581a: Status 404 returned error can't find the container with id a43fde46f86f91f7dca00bdbd545437e25d1a45e0d2cff1a8fecbf5f32dc581a Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.032567 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:56 crc kubenswrapper[4981]: E0227 18:47:56.032824 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-27 18:47:56.532814915 +0000 UTC m=+176.011596075 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.067224 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2q7p"] Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.067275 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xnkzj"] Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.071128 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-cpb6p"] Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.088144 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sfsnw"] Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.097941 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-v46wf"] Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.101506 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7mfkz"] Feb 27 18:47:56 crc kubenswrapper[4981]: W0227 18:47:56.117790 4981 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda086194a_cb30_43f5_8cf2_f67d0f85a36d.slice/crio-f24d4f66f09e9c74434f8894046a6e8463347cc723ed568c884b80917f1ae893 WatchSource:0}: Error finding container f24d4f66f09e9c74434f8894046a6e8463347cc723ed568c884b80917f1ae893: Status 404 returned error can't find the container with id f24d4f66f09e9c74434f8894046a6e8463347cc723ed568c884b80917f1ae893 Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.137653 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:47:56 crc kubenswrapper[4981]: E0227 18:47:56.138074 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:56.638046584 +0000 UTC m=+176.116827744 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.152429 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2vmct"] Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.245371 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:56 crc kubenswrapper[4981]: E0227 18:47:56.245703 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:56.745691838 +0000 UTC m=+176.224472988 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.298425 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gv2d7"] Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.299959 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqktd"] Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.344577 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pcmgp"] Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.346066 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-plxr2"] Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.347434 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:47:56 crc kubenswrapper[4981]: E0227 18:47:56.348443 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:56.848412121 +0000 UTC m=+176.327193281 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.348542 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5pgcp"] Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.349483 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-c4m77"] Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.355490 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7r9r7"] Feb 27 18:47:56 crc kubenswrapper[4981]: W0227 18:47:56.405705 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee74fec8_5dc9_4e29_8a9c_1ac61fc34a49.slice/crio-12b31cf37863f773a2a8daaa46a15d05b53166d62f5784d72ba9764d596e6343 WatchSource:0}: Error finding container 12b31cf37863f773a2a8daaa46a15d05b53166d62f5784d72ba9764d596e6343: Status 404 returned error can't find the container with id 12b31cf37863f773a2a8daaa46a15d05b53166d62f5784d72ba9764d596e6343 Feb 27 18:47:56 crc kubenswrapper[4981]: W0227 18:47:56.417595 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3667fc6c_078a_4be4_95ff_7174c74faf2c.slice/crio-1909512406b10dab9b8e6efae2635f0bd8defcc7eb3c237d11fdfe3bd7cfbd50 WatchSource:0}: Error finding container 1909512406b10dab9b8e6efae2635f0bd8defcc7eb3c237d11fdfe3bd7cfbd50: Status 404 returned error can't find the container with id 
1909512406b10dab9b8e6efae2635f0bd8defcc7eb3c237d11fdfe3bd7cfbd50 Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.452037 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:56 crc kubenswrapper[4981]: E0227 18:47:56.455860 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:56.955833058 +0000 UTC m=+176.434614228 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.515252 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-v5c96"] Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.553569 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:47:56 crc kubenswrapper[4981]: E0227 18:47:56.553843 4981 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:57.053827778 +0000 UTC m=+176.532608938 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.582854 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-r858c" podStartSLOduration=112.582834797 podStartE2EDuration="1m52.582834797s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:56.582297302 +0000 UTC m=+176.061078462" watchObservedRunningTime="2026-02-27 18:47:56.582834797 +0000 UTC m=+176.061615957" Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.618123 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-n7dd8"] Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.656471 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 
18:47:56 crc kubenswrapper[4981]: E0227 18:47:56.656797 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:57.156786189 +0000 UTC m=+176.635567349 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.665653 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85vdw" event={"ID":"b3b236af-9b1b-45f2-8780-ea9d4726bd0f","Type":"ContainerStarted","Data":"e13272e2ed16d8dbf7909f014df3ffca14823d4fdb6f8c12ea30d3fff0863112"} Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.665695 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85vdw" event={"ID":"b3b236af-9b1b-45f2-8780-ea9d4726bd0f","Type":"ContainerStarted","Data":"2794b5d486d9ad682ff237afd5b85f64562f397e83ea7c4084fd3bfd75d824de"} Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.666707 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85vdw" Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.668272 4981 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-85vdw container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.668329 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85vdw" podUID="b3b236af-9b1b-45f2-8780-ea9d4726bd0f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.668950 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t9grl" event={"ID":"32d9179a-38c6-482f-95be-c94b48b83856","Type":"ContainerStarted","Data":"74704a4d30276afc0224c3f1914df25b5a4e58114b9242344318d36c2ad3727d"} Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.668977 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t9grl" event={"ID":"32d9179a-38c6-482f-95be-c94b48b83856","Type":"ContainerStarted","Data":"df0802bedeedc14d225b26d2b8068bcac264ec1c3d02a8d8e1f48a2e2b58622a"} Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.669417 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-t9grl" Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.670421 4981 patch_prober.go:28] interesting pod/downloads-7954f5f757-t9grl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.670474 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t9grl" podUID="32d9179a-38c6-482f-95be-c94b48b83856" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 
10.217.0.33:8080: connect: connection refused" Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.672984 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-c4m77" event={"ID":"3667fc6c-078a-4be4-95ff-7174c74faf2c","Type":"ContainerStarted","Data":"1909512406b10dab9b8e6efae2635f0bd8defcc7eb3c237d11fdfe3bd7cfbd50"} Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.683521 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-h885x" podStartSLOduration=113.683504059 podStartE2EDuration="1m53.683504059s" podCreationTimestamp="2026-02-27 18:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:56.677789085 +0000 UTC m=+176.156570245" watchObservedRunningTime="2026-02-27 18:47:56.683504059 +0000 UTC m=+176.162285219" Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.685587 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jdhvt" event={"ID":"1107c99b-98a7-4103-9e6c-dde234daacaf","Type":"ContainerStarted","Data":"b706f9c7d30b2bf48ae3250362d0ff8dbc08152c7148d15a1c91c4086c575bdd"} Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.685654 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jdhvt" event={"ID":"1107c99b-98a7-4103-9e6c-dde234daacaf","Type":"ContainerStarted","Data":"f68c7651a7ce2684d3552361396c53cd7ebafb74c083f9de43110f5881c55231"} Feb 27 18:47:56 crc kubenswrapper[4981]: W0227 18:47:56.692113 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7aedfc8_3056_44a8_9389_f098e2bc39ec.slice/crio-48874cd8dc747c37d8f97435823d2d428e15940dd6e7bfad7b5b3feb924010a0 WatchSource:0}: Error finding 
container 48874cd8dc747c37d8f97435823d2d428e15940dd6e7bfad7b5b3feb924010a0: Status 404 returned error can't find the container with id 48874cd8dc747c37d8f97435823d2d428e15940dd6e7bfad7b5b3feb924010a0 Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.692537 4981 generic.go:334] "Generic (PLEG): container finished" podID="46dc2075-e24c-46cf-9885-3de46322461d" containerID="80d629608d8a5661d86fb2bd3e71a6594639633061e5bf1ba72e9482a0e521e8" exitCode=0 Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.692617 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" event={"ID":"46dc2075-e24c-46cf-9885-3de46322461d","Type":"ContainerDied","Data":"80d629608d8a5661d86fb2bd3e71a6594639633061e5bf1ba72e9482a0e521e8"} Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.725997 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v5c96" event={"ID":"df20cd9b-2dce-4295-84fb-d62f277eea32","Type":"ContainerStarted","Data":"4844a4d27c0e6995b9f89788ac377120484faee02128f570f52b9a23e75c9bfe"} Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.745357 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-5pgcp" event={"ID":"ee74fec8-5dc9-4e29-8a9c-1ac61fc34a49","Type":"ContainerStarted","Data":"12b31cf37863f773a2a8daaa46a15d05b53166d62f5784d72ba9764d596e6343"} Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.757732 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:47:56 crc kubenswrapper[4981]: E0227 18:47:56.758679 4981 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:57.258665478 +0000 UTC m=+176.737446638 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.775557 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2q7p" event={"ID":"01036322-73f2-4c61-b59d-ff9eff5d4b5f","Type":"ContainerStarted","Data":"3031b1bdfd050b8ee279a7682336ccd612d9dc6004a03e8d67391fd12b8f31af"} Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.797506 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sfsnw" event={"ID":"cd14f9a5-6dec-4df9-99a3-94c248ae4f60","Type":"ContainerStarted","Data":"ab6dfe7aa4253737f6db5fcb212a79164d64f9875beed46769812f39623d9162"} Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.805643 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9h9p4" event={"ID":"c616c83f-0616-4a1d-b2ac-69cdc88eef70","Type":"ContainerStarted","Data":"9a0f9f8e87f4dfa3e3619c2224083afe340d8644f302c07c0dc4a9825f9c34e3"} Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.809169 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-7mfkz" 
event={"ID":"981ad6df-4f80-446c-83a8-cf8e4bc7436d","Type":"ContainerStarted","Data":"c96d5cd28e02326c94c9b78c798b1ca4914215924e5b44a9239926742460d928"} Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.814850 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5cs45" event={"ID":"f25a5ec8-b28e-4b5c-a4f0-c6ad09e63b3d","Type":"ContainerStarted","Data":"52f2fd0160ae6589b2b6b6438ed870f15909db6554413ee397be7e87e9952efb"} Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.819140 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-s7wqk" event={"ID":"8977aa66-1107-4ab9-a774-0e6cebffd78f","Type":"ContainerStarted","Data":"dc2b6422123f9c9bc60fae65d70c48d74b274f4facbf9f1d13586cf92fb17768"} Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.819201 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-s7wqk" event={"ID":"8977aa66-1107-4ab9-a774-0e6cebffd78f","Type":"ContainerStarted","Data":"e812b4508de6eedb6f18feca638428f0d3bb0fea24875fafe73c4467b98b3cec"} Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.826805 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-swwkp" event={"ID":"e9dac184-b115-4584-8344-d4cfea132d7d","Type":"ContainerStarted","Data":"5eae79d08e7dbdd51941387fcaadc19541c924d719c615f06641545665faff2b"} Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.835117 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pcmgp" event={"ID":"ef632318-2ac5-418d-b9d4-dcd616b4d768","Type":"ContainerStarted","Data":"a448c9f845ec87d2a8605fe9b289d1aae554e87faa44a90e5c4eacda0155d07d"} Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.843555 4981 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" event={"ID":"f5ecc4da-0cfa-4632-8478-e48a3c8aba36","Type":"ContainerStarted","Data":"296ee30c7c15dfe55361b4d9b395b78649dadce987d04dc5722c9d91b795c284"} Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.856544 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v46wf" event={"ID":"2eb44459-26b2-48d3-931c-e718dda5133b","Type":"ContainerStarted","Data":"2e60679f09a47f7793dee9cbad00f7b73f71c22cb57e7e78ee9cdb3c1ef140f0"} Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.860980 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:56 crc kubenswrapper[4981]: E0227 18:47:56.861482 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:57.361470314 +0000 UTC m=+176.840251474 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.865154 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2vmct" event={"ID":"37a55d62-b540-4883-9548-3c0da02e8824","Type":"ContainerStarted","Data":"7a735dd95faa0a0155066c4e4491a37159a7ec256ef8f466aff41dd8f677b010"} Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.868791 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cpb6p" event={"ID":"05d54587-469b-407f-aa60-f0fdefb9dce7","Type":"ContainerStarted","Data":"cd31ff725127dec592e1043aea356e27b6ee4faf32093312a03e05ea37b95036"} Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.872759 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5" event={"ID":"89fbbf29-4a7d-40d3-ad12-9e1111396e8d","Type":"ContainerStarted","Data":"6c70de6feb244f9cd8ccc039ed9a2e77eb002500df9fabcd1b45d5c1baba4187"} Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.873422 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5" Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.875113 4981 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-rn6m5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 
10.217.0.5:8443: connect: connection refused" start-of-body=
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.875150 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5" podUID="89fbbf29-4a7d-40d3-ad12-9e1111396e8d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.875367 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqktd" event={"ID":"f7e7ee0e-64a2-45c2-9ed6-82998fd8687d","Type":"ContainerStarted","Data":"680615457e71a1cca28671344c1f2ba80bf5839b97da5a370cd36973bb9cb37d"}
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.876160 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bkfrt" event={"ID":"d5b69559-dbbb-451e-8a89-0d8c61a363f3","Type":"ContainerStarted","Data":"e843a5b775b77298a14e5528cdce19873afefb349fe67ee936fefdc7d72e8446"}
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.876184 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bkfrt" event={"ID":"d5b69559-dbbb-451e-8a89-0d8c61a363f3","Type":"ContainerStarted","Data":"6abe729f0d8c774795d53bf8b5def597c90d36b1c40f1cedf8ad2901c416bbec"}
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.879530 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gv2d7" event={"ID":"1411e866-be2e-40c6-af1a-7a8ddca854d3","Type":"ContainerStarted","Data":"62abe62f30c04c5c83bcd6e667aff677af986fa3ef0c33eea302390e9ca6a2a3"}
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.887882 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-922ln" event={"ID":"9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc","Type":"ContainerStarted","Data":"9f50353e580b08af29630ae17af7fdc339e3bd3bb6ce4180478d249a120193db"}
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.887959 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-922ln" event={"ID":"9068b6d6-8e5c-4fbe-a92e-73d6f1dd54bc","Type":"ContainerStarted","Data":"a43fde46f86f91f7dca00bdbd545437e25d1a45e0d2cff1a8fecbf5f32dc581a"}
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.931443 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dllzn" event={"ID":"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea","Type":"ContainerStarted","Data":"ba060d7604cc473c5f08b17e16ee67f5943431e977b2cefc1dc5b37a5dca2f27"}
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.931495 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dllzn" event={"ID":"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea","Type":"ContainerStarted","Data":"291c0bd07507e20cc41d357f9174a62410e135697c1c09ac8ee311ce37b434aa"}
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.937009 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" event={"ID":"86f8ab04-83b9-497b-a8b4-cde27e61d568","Type":"ContainerStarted","Data":"a60cec4e64bc7442546f52ffe51a9d58559eda79dadaaf65106b275d95021031"}
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.937791 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8"
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.937889 4981 patch_prober.go:28] interesting pod/router-default-5444994796-r858c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 27 18:47:56 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld
Feb 27 18:47:56 crc kubenswrapper[4981]: [+]process-running ok
Feb 27 18:47:56 crc kubenswrapper[4981]: healthz check failed
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.937918 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-r858c" podUID="2d1ccfd2-99ab-4caf-82d3-6b58656de39f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.939314 4981 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-v4vk8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" start-of-body=
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.939376 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" podUID="86f8ab04-83b9-497b-a8b4-cde27e61d568" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused"
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.947282 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xmtml" event={"ID":"8071e59a-5e70-4dac-b36c-e17dd74f75b0","Type":"ContainerStarted","Data":"49181f299da99a85947ba645a80e72fe5302883b38e05c491564091e4967d836"}
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.951619 4981 generic.go:334] "Generic (PLEG): container finished" podID="deda0ab1-f81e-4898-b4cb-5627947b5ed4" containerID="0ac82c5f99a9109fc3d62f2d6be718fe50b86f668c9fa326f4f9ef83ff785081" exitCode=0
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.951670 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" event={"ID":"deda0ab1-f81e-4898-b4cb-5627947b5ed4","Type":"ContainerDied","Data":"0ac82c5f99a9109fc3d62f2d6be718fe50b86f668c9fa326f4f9ef83ff785081"}
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.951690 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" event={"ID":"deda0ab1-f81e-4898-b4cb-5627947b5ed4","Type":"ContainerStarted","Data":"78cebaffaa793f2a98dcabb4c1649bee67ed9d8232687afc3c7334af28ce0af5"}
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.955904 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-kh5p2" event={"ID":"b7d035ce-f026-4668-9fca-c344f1fe60e3","Type":"ContainerStarted","Data":"683f3f2f0021bfd989263a07a7591bc65893d2e805b7474481414b3ad12cfc72"}
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.955946 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-kh5p2" event={"ID":"b7d035ce-f026-4668-9fca-c344f1fe60e3","Type":"ContainerStarted","Data":"bb0f23f9b6b95712fecdac102863e27bfeb0b1653c9b43001a872ee7f4c8ff34"}
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.966836 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 18:47:56 crc kubenswrapper[4981]: E0227 18:47:56.968168 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:57.468153798 +0000 UTC m=+176.946934948 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.968956 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv" podStartSLOduration=112.968943231 podStartE2EDuration="1m52.968943231s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:56.957664809 +0000 UTC m=+176.436445969" watchObservedRunningTime="2026-02-27 18:47:56.968943231 +0000 UTC m=+176.447724451"
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.970497 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-plxr2" event={"ID":"310c5674-0397-4720-b996-d50d28ebc783","Type":"ContainerStarted","Data":"c48427fcafefc8cb14d7f7c83698a0ed4139cd60be71fc41bc9c133fe338ba8d"}
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.974208 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5f96t" event={"ID":"41343732-7cc4-4fe6-9435-8a6332ba522c","Type":"ContainerStarted","Data":"d3d192f674be315f512d10157979cc91dbc5fd5ce76b0f44bef9dc0faffd844b"}
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.974242 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5f96t" event={"ID":"41343732-7cc4-4fe6-9435-8a6332ba522c","Type":"ContainerStarted","Data":"a980ccf0cae43bb403304e00ec59e623e9f246d1776bc1bb07559a5e1d60be91"}
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.977669 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l8kzl" event={"ID":"23834dc7-7dcc-4a63-b666-3c1829da4cf4","Type":"ContainerStarted","Data":"f539147627c105d26b274b27b457c195de0a18a4217ed92e037bcedc050901ab"}
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.977701 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l8kzl" event={"ID":"23834dc7-7dcc-4a63-b666-3c1829da4cf4","Type":"ContainerStarted","Data":"ab3f03d6cb6cdcdaefb440aa5bc1af11f0883b7997e7d5b63fd8e585c37ccc20"}
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.986919 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qb2m5" event={"ID":"79fd2558-1055-4895-9db5-58da8eb8aacf","Type":"ContainerStarted","Data":"aa9e3f55a5e577063f5acb410d3fd5666a10421456725b067fe9ba25055256f7"}
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.988852 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hx8gq" event={"ID":"a086194a-cb30-43f5-8cf2-f67d0f85a36d","Type":"ContainerStarted","Data":"f24d4f66f09e9c74434f8894046a6e8463347cc723ed568c884b80917f1ae893"}
Feb 27 18:47:56 crc kubenswrapper[4981]: I0227 18:47:56.991178 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xnkzj" event={"ID":"9904cc6d-0e10-4a0b-bd3b-c0e6592b4856","Type":"ContainerStarted","Data":"b9a4001f8527c3da950442549b67e3ac6a57a369f7da5459cc4c3144a4b68bbe"}
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.000757 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h8fzp" event={"ID":"bbbe9d1b-2a55-4b34-b452-32f51eef3278","Type":"ContainerStarted","Data":"01062c1893446b22bcaf0cf710c8380163209bd2aa0469694f9a90e70ae81f00"}
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.001304 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h8fzp"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.006914 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7hs2" event={"ID":"450816f5-eb2f-44e6-9b62-fd3f3b2fbf48","Type":"ContainerStarted","Data":"f302305beccd22427bffef14cbca7832f07458eb534ce4913069d3606f2ca79f"}
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.069518 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm"
Feb 27 18:47:57 crc kubenswrapper[4981]: E0227 18:47:57.070069 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:57.570034506 +0000 UTC m=+177.048815666 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.127212 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-jdhvt" podStartSLOduration=113.127196979 podStartE2EDuration="1m53.127196979s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:57.126513518 +0000 UTC m=+176.605294678" watchObservedRunningTime="2026-02-27 18:47:57.127196979 +0000 UTC m=+176.605978129"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.127885 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.149139 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-922ln" podStartSLOduration=113.149122823 podStartE2EDuration="1m53.149122823s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:57.14767334 +0000 UTC m=+176.626454500" watchObservedRunningTime="2026-02-27 18:47:57.149122823 +0000 UTC m=+176.627903983"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.170290 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5f96t" podStartSLOduration=113.170270204 podStartE2EDuration="1m53.170270204s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:57.167647125 +0000 UTC m=+176.646428285" watchObservedRunningTime="2026-02-27 18:47:57.170270204 +0000 UTC m=+176.649051364"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.170800 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 18:47:57 crc kubenswrapper[4981]: E0227 18:47:57.176025 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:57.676009188 +0000 UTC m=+177.154790348 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.204297 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5cs45" podStartSLOduration=113.204283325 podStartE2EDuration="1m53.204283325s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:57.202194812 +0000 UTC m=+176.680975962" watchObservedRunningTime="2026-02-27 18:47:57.204283325 +0000 UTC m=+176.683064485"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.274875 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm"
Feb 27 18:47:57 crc kubenswrapper[4981]: E0227 18:47:57.275311 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:57.775295488 +0000 UTC m=+177.254076648 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.292527 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-s7wqk" podStartSLOduration=5.29251031 podStartE2EDuration="5.29251031s" podCreationTimestamp="2026-02-27 18:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:57.245729572 +0000 UTC m=+176.724510732" watchObservedRunningTime="2026-02-27 18:47:57.29251031 +0000 UTC m=+176.771291470"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.292789 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-t9grl" podStartSLOduration=114.292784939 podStartE2EDuration="1m54.292784939s" podCreationTimestamp="2026-02-27 18:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:57.280099574 +0000 UTC m=+176.758880734" watchObservedRunningTime="2026-02-27 18:47:57.292784939 +0000 UTC m=+176.771566099"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.314453 4981 ???:1] "http: TLS handshake error from 192.168.126.11:55328: no serving certificate available for the kubelet"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.317856 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-9h9p4" podStartSLOduration=114.317841178 podStartE2EDuration="1m54.317841178s" podCreationTimestamp="2026-02-27 18:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:57.315987491 +0000 UTC m=+176.794768641" watchObservedRunningTime="2026-02-27 18:47:57.317841178 +0000 UTC m=+176.796622338"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.360999 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h8fzp" podStartSLOduration=114.360980175 podStartE2EDuration="1m54.360980175s" podCreationTimestamp="2026-02-27 18:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:57.360880372 +0000 UTC m=+176.839661532" watchObservedRunningTime="2026-02-27 18:47:57.360980175 +0000 UTC m=+176.839761335"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.375786 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 18:47:57 crc kubenswrapper[4981]: E0227 18:47:57.376099 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:57.876081443 +0000 UTC m=+177.354862603 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.414141 4981 ???:1] "http: TLS handshake error from 192.168.126.11:55332: no serving certificate available for the kubelet"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.446605 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bkfrt" podStartSLOduration=113.446592121 podStartE2EDuration="1m53.446592121s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:57.445608021 +0000 UTC m=+176.924389181" watchObservedRunningTime="2026-02-27 18:47:57.446592121 +0000 UTC m=+176.925373281"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.447017 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-xmtml" podStartSLOduration=114.447013204 podStartE2EDuration="1m54.447013204s" podCreationTimestamp="2026-02-27 18:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:57.404295599 +0000 UTC m=+176.883076839" watchObservedRunningTime="2026-02-27 18:47:57.447013204 +0000 UTC m=+176.925794354"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.477308 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm"
Feb 27 18:47:57 crc kubenswrapper[4981]: E0227 18:47:57.477625 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:57.977614552 +0000 UTC m=+177.456395712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.517821 4981 ???:1] "http: TLS handshake error from 192.168.126.11:55346: no serving certificate available for the kubelet"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.527498 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5" podStartSLOduration=113.527469733 podStartE2EDuration="1m53.527469733s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:57.527199704 +0000 UTC m=+177.005980864" watchObservedRunningTime="2026-02-27 18:47:57.527469733 +0000 UTC m=+177.006250893"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.558312 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85vdw" podStartSLOduration=113.558294377 podStartE2EDuration="1m53.558294377s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:57.557484872 +0000 UTC m=+177.036266032" watchObservedRunningTime="2026-02-27 18:47:57.558294377 +0000 UTC m=+177.037075537"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.578145 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 18:47:57 crc kubenswrapper[4981]: E0227 18:47:57.578500 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:58.078484059 +0000 UTC m=+177.557265219 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.613441 4981 ???:1] "http: TLS handshake error from 192.168.126.11:55358: no serving certificate available for the kubelet"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.647359 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" podStartSLOduration=114.647341546 podStartE2EDuration="1m54.647341546s" podCreationTimestamp="2026-02-27 18:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:57.645751758 +0000 UTC m=+177.124532918" watchObservedRunningTime="2026-02-27 18:47:57.647341546 +0000 UTC m=+177.126122706"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.684602 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm"
Feb 27 18:47:57 crc kubenswrapper[4981]: E0227 18:47:57.684965 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:58.184949067 +0000 UTC m=+177.663730227 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.687264 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qb2m5" podStartSLOduration=113.687250166 podStartE2EDuration="1m53.687250166s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:57.686471202 +0000 UTC m=+177.165252362" watchObservedRunningTime="2026-02-27 18:47:57.687250166 +0000 UTC m=+177.166031326"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.707994 4981 ???:1] "http: TLS handshake error from 192.168.126.11:55370: no serving certificate available for the kubelet"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.723434 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-kh5p2" podStartSLOduration=114.723415712 podStartE2EDuration="1m54.723415712s" podCreationTimestamp="2026-02-27 18:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:57.72202563 +0000 UTC m=+177.200806790" watchObservedRunningTime="2026-02-27 18:47:57.723415712 +0000 UTC m=+177.202196872"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.765238 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-l8kzl" podStartSLOduration=113.76522577 podStartE2EDuration="1m53.76522577s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:57.764329672 +0000 UTC m=+177.243110832" watchObservedRunningTime="2026-02-27 18:47:57.76522577 +0000 UTC m=+177.244006930"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.785249 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 18:47:57 crc kubenswrapper[4981]: E0227 18:47:57.785580 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:58.285566417 +0000 UTC m=+177.764347577 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.829562 4981 ???:1] "http: TLS handshake error from 192.168.126.11:55372: no serving certificate available for the kubelet"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.834418 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-swwkp" podStartSLOduration=113.834402516 podStartE2EDuration="1m53.834402516s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:57.833555531 +0000 UTC m=+177.312336691" watchObservedRunningTime="2026-02-27 18:47:57.834402516 +0000 UTC m=+177.313183686"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.854508 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-dllzn" podStartSLOduration=114.854493806 podStartE2EDuration="1m54.854493806s" podCreationTimestamp="2026-02-27 18:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:57.85198484 +0000 UTC m=+177.330766000" watchObservedRunningTime="2026-02-27 18:47:57.854493806 +0000 UTC m=+177.333274966"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.887328 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm"
Feb 27 18:47:57 crc kubenswrapper[4981]: E0227 18:47:57.887670 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:58.387657371 +0000 UTC m=+177.866438521 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.939996 4981 patch_prober.go:28] interesting pod/router-default-5444994796-r858c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 27 18:47:57 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld
Feb 27 18:47:57 crc kubenswrapper[4981]: [+]process-running ok
Feb 27 18:47:57 crc kubenswrapper[4981]: healthz check failed
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.940071 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-r858c" podUID="2d1ccfd2-99ab-4caf-82d3-6b58656de39f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.964022 4981 ???:1] "http: TLS handshake error from 192.168.126.11:55380: no serving certificate available for the kubelet"
Feb 27 18:47:57 crc kubenswrapper[4981]: I0227 18:47:57.988822 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 18:47:57 crc kubenswrapper[4981]: E0227 18:47:57.989304 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:58.489289072 +0000 UTC m=+177.968070232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.040745 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-n7dd8" event={"ID":"f7aedfc8-3056-44a8-9389-f098e2bc39ec","Type":"ContainerStarted","Data":"68ce3185716a1e2416035fb86c0f608db93713668fe6230cc34f1f68f18467dd"}
Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.040786 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-n7dd8" event={"ID":"f7aedfc8-3056-44a8-9389-f098e2bc39ec","Type":"ContainerStarted","Data":"48874cd8dc747c37d8f97435823d2d428e15940dd6e7bfad7b5b3feb924010a0"}
Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.045041 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2q7p" event={"ID":"01036322-73f2-4c61-b59d-ff9eff5d4b5f","Type":"ContainerStarted","Data":"ff4c636eb7239cac1e760a38e504eba53d3e0e45d0effa3a7128e3ec0b164e80"}
Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.046083 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2q7p"
Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.052632 4981 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g2q7p container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.052669 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2q7p" podUID="01036322-73f2-4c61-b59d-ff9eff5d4b5f" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused"
Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.053041 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sfsnw" event={"ID":"cd14f9a5-6dec-4df9-99a3-94c248ae4f60","Type":"ContainerStarted","Data":"cae80181bf20cc224bfc50144ad7aeba0adf30ac452d0502781b1baa70724046"}
Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.056137 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="openshift-ingress-canary/ingress-canary-n7dd8" podStartSLOduration=6.056126328 podStartE2EDuration="6.056126328s" podCreationTimestamp="2026-02-27 18:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:58.05453139 +0000 UTC m=+177.533312550" watchObservedRunningTime="2026-02-27 18:47:58.056126328 +0000 UTC m=+177.534907478" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.063992 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v46wf" event={"ID":"2eb44459-26b2-48d3-931c-e718dda5133b","Type":"ContainerStarted","Data":"86db7976e3d8b1ca929ea4d4a750b5834cc504ceecb6a07e03f9c79206c6d513"} Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.064031 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v46wf" event={"ID":"2eb44459-26b2-48d3-931c-e718dda5133b","Type":"ContainerStarted","Data":"ee642a8c2c262e571a10f359cb5371b820a54b8950691aa5dea2853642416053"} Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.068329 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2vmct" event={"ID":"37a55d62-b540-4883-9548-3c0da02e8824","Type":"ContainerStarted","Data":"f2a99f05aad8684baecb47b3c14153e717347a99be586a07c1dc4be3394e860c"} Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.079398 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" event={"ID":"46dc2075-e24c-46cf-9885-3de46322461d","Type":"ContainerStarted","Data":"e4898643b139263693f01a01be7e66b975080e0f49192c2b40be33acd9bfdd41"} Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.082274 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2q7p" podStartSLOduration=114.08226458 podStartE2EDuration="1m54.08226458s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:58.080089035 +0000 UTC m=+177.558870195" watchObservedRunningTime="2026-02-27 18:47:58.08226458 +0000 UTC m=+177.561045740" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.090410 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:58 crc kubenswrapper[4981]: E0227 18:47:58.090752 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:58.590741367 +0000 UTC m=+178.069522527 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.091566 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cpb6p" event={"ID":"05d54587-469b-407f-aa60-f0fdefb9dce7","Type":"ContainerStarted","Data":"b4212909e28da49b9741f28a0447eaac1752b44b580a00d7863865b97bbef6fe"} Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.091600 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cpb6p" event={"ID":"05d54587-469b-407f-aa60-f0fdefb9dce7","Type":"ContainerStarted","Data":"eae6a32a30fb827067b24924020696323f8e8fb446d959a9366c8bcaee173127"} Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.102959 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gv2d7" event={"ID":"1411e866-be2e-40c6-af1a-7a8ddca854d3","Type":"ContainerStarted","Data":"b9114349f6bda66ce632ba5051bca0d55b06e1bf96dcd63274f036b3de2b1066"} Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.103010 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gv2d7" event={"ID":"1411e866-be2e-40c6-af1a-7a8ddca854d3","Type":"ContainerStarted","Data":"16280e49423f208cae457a70b2218cb07221a6e41cfb24f76879b44a870deecc"} Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.103538 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-gv2d7" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.103814 4981 ???:1] "http: 
TLS handshake error from 192.168.126.11:55390: no serving certificate available for the kubelet" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.115690 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-sfsnw" podStartSLOduration=114.115664633 podStartE2EDuration="1m54.115664633s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:58.113003782 +0000 UTC m=+177.591784942" watchObservedRunningTime="2026-02-27 18:47:58.115664633 +0000 UTC m=+177.594445793" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.116577 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" event={"ID":"deda0ab1-f81e-4898-b4cb-5627947b5ed4","Type":"ContainerStarted","Data":"ae9890a1757d6e8f05e2a00a415bc904c17106eb9cb19b35f33ae00cf4f765b2"} Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.131650 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-plxr2" event={"ID":"310c5674-0397-4720-b996-d50d28ebc783","Type":"ContainerStarted","Data":"6dcc321dc5b780128161e503b7f47eac5d348c3bfce67ac9e4d011c485744a06"} Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.132241 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-plxr2" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.137540 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cpb6p" podStartSLOduration=114.137526075 podStartE2EDuration="1m54.137526075s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:58.135736741 +0000 UTC m=+177.614517891" watchObservedRunningTime="2026-02-27 18:47:58.137526075 +0000 UTC m=+177.616307225" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.141493 4981 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-plxr2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.141535 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-plxr2" podUID="310c5674-0397-4720-b996-d50d28ebc783" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.150187 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqktd" event={"ID":"f7e7ee0e-64a2-45c2-9ed6-82998fd8687d","Type":"ContainerStarted","Data":"b6ba4e020a31f5ba4437eb8180ca41220e3deaa251826c7352e3582e80afeae9"} Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.150223 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqktd" event={"ID":"f7e7ee0e-64a2-45c2-9ed6-82998fd8687d","Type":"ContainerStarted","Data":"ec57e6c0655b8f975168d0fe5a104721f657df92d1ea952ba162bacbb98cfddf"} Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.150237 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqktd" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.174850 4981 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-service-ca/service-ca-9c57cc56f-5pgcp" event={"ID":"ee74fec8-5dc9-4e29-8a9c-1ac61fc34a49","Type":"ContainerStarted","Data":"205cd34451e7e2c1a69ffadb4359c8f002cb19e2b3d86beabf885b6f0d8468c3"} Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.191521 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.191901 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v5c96" event={"ID":"df20cd9b-2dce-4295-84fb-d62f277eea32","Type":"ContainerStarted","Data":"881d4c2a51465c9232436e89be30a26e726f71b87f030cd517cc9fef8bf32e41"} Feb 27 18:47:58 crc kubenswrapper[4981]: E0227 18:47:58.193117 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:58.69310076 +0000 UTC m=+178.171881920 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.213194 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2vmct" podStartSLOduration=114.213180779 podStartE2EDuration="1m54.213180779s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:58.212915931 +0000 UTC m=+177.691697091" watchObservedRunningTime="2026-02-27 18:47:58.213180779 +0000 UTC m=+177.691961939" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.213210 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pcmgp" event={"ID":"ef632318-2ac5-418d-b9d4-dcd616b4d768","Type":"ContainerStarted","Data":"34b5f6a6014362fa4ca77f038657ff566565bf3181688e1629ac15b25e8a622a"} Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.214241 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-pcmgp" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.214497 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-v46wf" podStartSLOduration=114.214490539 podStartE2EDuration="1m54.214490539s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:58.1640398 +0000 UTC m=+177.642820960" watchObservedRunningTime="2026-02-27 18:47:58.214490539 +0000 UTC m=+177.693271699" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.225246 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7hs2" event={"ID":"450816f5-eb2f-44e6-9b62-fd3f3b2fbf48","Type":"ContainerStarted","Data":"769e10349c380731c8cd8620341376617a042655d93ff059fee99e9f7eb72e00"} Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.225493 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7hs2" event={"ID":"450816f5-eb2f-44e6-9b62-fd3f3b2fbf48","Type":"ContainerStarted","Data":"f5d2d2642ecc564bf9db50440d7db4d7f5c5e955c6e0ea57f53239d241d4f462"} Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.230138 4981 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pcmgp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.230195 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pcmgp" podUID="ef632318-2ac5-418d-b9d4-dcd616b4d768" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.235419 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hx8gq" event={"ID":"a086194a-cb30-43f5-8cf2-f67d0f85a36d","Type":"ContainerStarted","Data":"e52c168c9c280b8130c82f2ee420f59bb5a2849652c6439d7c6413af310455e4"} 
Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.250816 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-7mfkz" event={"ID":"981ad6df-4f80-446c-83a8-cf8e4bc7436d","Type":"ContainerStarted","Data":"21c9ea2d48b645d22805cf4b18571a5081ffbcd8516b4f59e1539819332ff247"} Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.251367 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-7mfkz" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.263801 4981 patch_prober.go:28] interesting pod/console-operator-58897d9998-7mfkz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.263868 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-7mfkz" podUID="981ad6df-4f80-446c-83a8-cf8e4bc7436d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.264542 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-c4m77" event={"ID":"3667fc6c-078a-4be4-95ff-7174c74faf2c","Type":"ContainerStarted","Data":"4aba3365949c213515f4e26cd2b95c66c0d3ca9524e04f89240405c44aadd562"} Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.268504 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xnkzj" event={"ID":"9904cc6d-0e10-4a0b-bd3b-c0e6592b4856","Type":"ContainerStarted","Data":"a48fbddde02e8928f11c069bac2a8c673b5abcfe5e032e8d6c796de1e6b57b68"} Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 
18:47:58.268546 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xnkzj" event={"ID":"9904cc6d-0e10-4a0b-bd3b-c0e6592b4856","Type":"ContainerStarted","Data":"20afa47281ae9898623045d31ae3ac8bf6459186e312c63a0392a3d11679b1e2"} Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.269586 4981 patch_prober.go:28] interesting pod/downloads-7954f5f757-t9grl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.269647 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t9grl" podUID="32d9179a-38c6-482f-95be-c94b48b83856" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.277582 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" podStartSLOduration=114.277568561 podStartE2EDuration="1m54.277568561s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:58.276944491 +0000 UTC m=+177.755725651" watchObservedRunningTime="2026-02-27 18:47:58.277568561 +0000 UTC m=+177.756349721" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.278270 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.278694 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-gv2d7" podStartSLOduration=6.278689675 
podStartE2EDuration="6.278689675s" podCreationTimestamp="2026-02-27 18:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:58.243022603 +0000 UTC m=+177.721803763" watchObservedRunningTime="2026-02-27 18:47:58.278689675 +0000 UTC m=+177.757470835" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.289139 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-85vdw" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.294392 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:58 crc kubenswrapper[4981]: E0227 18:47:58.296859 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:58.796845375 +0000 UTC m=+178.275626535 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.297275 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-hx8gq" podStartSLOduration=114.297260928 podStartE2EDuration="1m54.297260928s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:58.292529394 +0000 UTC m=+177.771310554" watchObservedRunningTime="2026-02-27 18:47:58.297260928 +0000 UTC m=+177.776042088" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.347465 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-c4m77" podStartSLOduration=114.347446649 podStartE2EDuration="1m54.347446649s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:58.34317255 +0000 UTC m=+177.821953730" watchObservedRunningTime="2026-02-27 18:47:58.347446649 +0000 UTC m=+177.826227809" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.349035 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-7mfkz" podStartSLOduration=115.349029166 podStartE2EDuration="1m55.349029166s" podCreationTimestamp="2026-02-27 18:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:58.325287727 +0000 UTC m=+177.804068877" watchObservedRunningTime="2026-02-27 18:47:58.349029166 +0000 UTC m=+177.827810326" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.375205 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-xnkzj" podStartSLOduration=114.37518829 podStartE2EDuration="1m54.37518829s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:58.368135126 +0000 UTC m=+177.846916286" watchObservedRunningTime="2026-02-27 18:47:58.37518829 +0000 UTC m=+177.853969450" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.395401 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:47:58 crc kubenswrapper[4981]: E0227 18:47:58.395675 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:58.89565978 +0000 UTC m=+178.374440940 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.398830 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:58 crc kubenswrapper[4981]: E0227 18:47:58.407497 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:58.907470239 +0000 UTC m=+178.386251399 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.455201 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-plxr2" podStartSLOduration=114.455179535 podStartE2EDuration="1m54.455179535s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:58.418260376 +0000 UTC m=+177.897041536" watchObservedRunningTime="2026-02-27 18:47:58.455179535 +0000 UTC m=+177.933960695" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.491925 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-v5c96" podStartSLOduration=114.491908537 podStartE2EDuration="1m54.491908537s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:58.440150239 +0000 UTC m=+177.918931399" watchObservedRunningTime="2026-02-27 18:47:58.491908537 +0000 UTC m=+177.970689697" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.499643 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.500118 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:47:58 crc kubenswrapper[4981]: E0227 18:47:58.501032 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:59.001015274 +0000 UTC m=+178.479796434 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.529125 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqktd" podStartSLOduration=114.529112146 podStartE2EDuration="1m54.529112146s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:58.52725887 +0000 UTC m=+178.006040030" watchObservedRunningTime="2026-02-27 18:47:58.529112146 +0000 UTC m=+178.007893306" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.581667 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-pcmgp" podStartSLOduration=114.581647928 podStartE2EDuration="1m54.581647928s" 
podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:58.580517884 +0000 UTC m=+178.059299054" watchObservedRunningTime="2026-02-27 18:47:58.581647928 +0000 UTC m=+178.060429078" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.582756 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-5pgcp" podStartSLOduration=114.582751622 podStartE2EDuration="1m54.582751622s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:58.551198415 +0000 UTC m=+178.029979575" watchObservedRunningTime="2026-02-27 18:47:58.582751622 +0000 UTC m=+178.061532772" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.602185 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:58 crc kubenswrapper[4981]: E0227 18:47:58.602502 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:59.10249207 +0000 UTC m=+178.581273230 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.608463 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" podStartSLOduration=115.608450471 podStartE2EDuration="1m55.608450471s" podCreationTimestamp="2026-02-27 18:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:58.606968705 +0000 UTC m=+178.085749865" watchObservedRunningTime="2026-02-27 18:47:58.608450471 +0000 UTC m=+178.087231631" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.657554 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7hs2" podStartSLOduration=115.657541829 podStartE2EDuration="1m55.657541829s" podCreationTimestamp="2026-02-27 18:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:47:58.654830916 +0000 UTC m=+178.133612076" watchObservedRunningTime="2026-02-27 18:47:58.657541829 +0000 UTC m=+178.136322989" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.702664 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:47:58 crc kubenswrapper[4981]: E0227 18:47:58.702792 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:59.202760669 +0000 UTC m=+178.681541829 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.702864 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:58 crc kubenswrapper[4981]: E0227 18:47:58.703250 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:59.203242864 +0000 UTC m=+178.682024024 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.798334 4981 ???:1] "http: TLS handshake error from 192.168.126.11:57378: no serving certificate available for the kubelet" Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.803943 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:47:58 crc kubenswrapper[4981]: E0227 18:47:58.804129 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:59.304105232 +0000 UTC m=+178.782886392 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.804298 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:58 crc kubenswrapper[4981]: E0227 18:47:58.804604 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:59.304592437 +0000 UTC m=+178.783373597 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.905005 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:47:58 crc kubenswrapper[4981]: E0227 18:47:58.905193 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:59.405167415 +0000 UTC m=+178.883948565 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.905300 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:58 crc kubenswrapper[4981]: E0227 18:47:58.905597 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:59.405584658 +0000 UTC m=+178.884365818 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.936084 4981 patch_prober.go:28] interesting pod/router-default-5444994796-r858c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 18:47:58 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld Feb 27 18:47:58 crc kubenswrapper[4981]: [+]process-running ok Feb 27 18:47:58 crc kubenswrapper[4981]: healthz check failed Feb 27 18:47:58 crc kubenswrapper[4981]: I0227 18:47:58.936155 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-r858c" podUID="2d1ccfd2-99ab-4caf-82d3-6b58656de39f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.006549 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:47:59 crc kubenswrapper[4981]: E0227 18:47:59.006734 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 18:47:59.506707134 +0000 UTC m=+178.985488294 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.006978 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:59 crc kubenswrapper[4981]: E0227 18:47:59.007338 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:59.507321571 +0000 UTC m=+178.986102721 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.107775 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:47:59 crc kubenswrapper[4981]: E0227 18:47:59.107923 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:59.60789794 +0000 UTC m=+179.086679100 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.108228 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:59 crc kubenswrapper[4981]: E0227 18:47:59.108572 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:59.608562941 +0000 UTC m=+179.087344101 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.209121 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:47:59 crc kubenswrapper[4981]: E0227 18:47:59.209312 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:59.709284854 +0000 UTC m=+179.188066014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.209420 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:59 crc kubenswrapper[4981]: E0227 18:47:59.209731 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:59.709723167 +0000 UTC m=+179.188504327 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.274012 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" event={"ID":"deda0ab1-f81e-4898-b4cb-5627947b5ed4","Type":"ContainerStarted","Data":"ad2332688072979e3ea7f32ccc9213575b2daaeba36137c64904e6a70f53c491"} Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.275702 4981 generic.go:334] "Generic (PLEG): container finished" podID="b7d035ce-f026-4668-9fca-c344f1fe60e3" containerID="683f3f2f0021bfd989263a07a7591bc65893d2e805b7474481414b3ad12cfc72" exitCode=0 Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.275744 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-kh5p2" event={"ID":"b7d035ce-f026-4668-9fca-c344f1fe60e3","Type":"ContainerDied","Data":"683f3f2f0021bfd989263a07a7591bc65893d2e805b7474481414b3ad12cfc72"} Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.277476 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-c4m77" event={"ID":"3667fc6c-078a-4be4-95ff-7174c74faf2c","Type":"ContainerStarted","Data":"f24d80407c29f8b5b55a6ca02edab0818428002572f722fe477f62ee4b0a89a6"} Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.278798 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" 
event={"ID":"f5ecc4da-0cfa-4632-8478-e48a3c8aba36","Type":"ContainerStarted","Data":"6eccdb4e9e5a7aa4854a0905f517cd327a45aa1e834da2d20af6dbcd8769c1b2"} Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.279332 4981 patch_prober.go:28] interesting pod/console-operator-58897d9998-7mfkz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.279385 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-7mfkz" podUID="981ad6df-4f80-446c-83a8-cf8e4bc7436d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.280412 4981 patch_prober.go:28] interesting pod/downloads-7954f5f757-t9grl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.280508 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t9grl" podUID="32d9179a-38c6-482f-95be-c94b48b83856" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.280452 4981 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pcmgp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Feb 27 18:47:59 crc kubenswrapper[4981]: 
I0227 18:47:59.280577 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pcmgp" podUID="ef632318-2ac5-418d-b9d4-dcd616b4d768" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.294464 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-h8fzp" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.310182 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:47:59 crc kubenswrapper[4981]: E0227 18:47:59.310385 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:59.810355248 +0000 UTC m=+179.289136408 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.310441 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:59 crc kubenswrapper[4981]: E0227 18:47:59.310743 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:59.810730849 +0000 UTC m=+179.289512009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.314466 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g2q7p" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.412046 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:47:59 crc kubenswrapper[4981]: E0227 18:47:59.412196 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:47:59.912163204 +0000 UTC m=+179.390944364 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.412384 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:59 crc kubenswrapper[4981]: E0227 18:47:59.413136 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:47:59.913128483 +0000 UTC m=+179.391909643 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.495166 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.495219 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.516886 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:47:59 crc kubenswrapper[4981]: E0227 18:47:59.517082 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:48:00.017042243 +0000 UTC m=+179.495823403 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.566542 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m9ppw"] Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.567493 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m9ppw" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.572495 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.592317 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m9ppw"] Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.622636 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:59 crc kubenswrapper[4981]: E0227 18:47:59.622958 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:48:00.122947223 +0000 UTC m=+179.601728383 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.723182 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.723627 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3bd579c-4d5b-496d-bade-9a78e439970d-catalog-content\") pod \"certified-operators-m9ppw\" (UID: \"e3bd579c-4d5b-496d-bade-9a78e439970d\") " pod="openshift-marketplace/certified-operators-m9ppw" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.723747 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lmqj\" (UniqueName: \"kubernetes.io/projected/e3bd579c-4d5b-496d-bade-9a78e439970d-kube-api-access-5lmqj\") pod \"certified-operators-m9ppw\" (UID: \"e3bd579c-4d5b-496d-bade-9a78e439970d\") " pod="openshift-marketplace/certified-operators-m9ppw" Feb 27 18:47:59 crc kubenswrapper[4981]: E0227 18:47:59.723790 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 18:48:00.223763419 +0000 UTC m=+179.702544579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.723877 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3bd579c-4d5b-496d-bade-9a78e439970d-utilities\") pod \"certified-operators-m9ppw\" (UID: \"e3bd579c-4d5b-496d-bade-9a78e439970d\") " pod="openshift-marketplace/certified-operators-m9ppw" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.738611 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fzncx"] Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.739469 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fzncx" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.744439 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.768809 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fzncx"] Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.824984 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8d010a2-1cec-4e71-ac60-29b2e20787f4-catalog-content\") pod \"community-operators-fzncx\" (UID: \"a8d010a2-1cec-4e71-ac60-29b2e20787f4\") " pod="openshift-marketplace/community-operators-fzncx" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.825029 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.825067 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3bd579c-4d5b-496d-bade-9a78e439970d-catalog-content\") pod \"certified-operators-m9ppw\" (UID: \"e3bd579c-4d5b-496d-bade-9a78e439970d\") " pod="openshift-marketplace/certified-operators-m9ppw" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.825092 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lmqj\" (UniqueName: \"kubernetes.io/projected/e3bd579c-4d5b-496d-bade-9a78e439970d-kube-api-access-5lmqj\") pod 
\"certified-operators-m9ppw\" (UID: \"e3bd579c-4d5b-496d-bade-9a78e439970d\") " pod="openshift-marketplace/certified-operators-m9ppw" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.825118 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3bd579c-4d5b-496d-bade-9a78e439970d-utilities\") pod \"certified-operators-m9ppw\" (UID: \"e3bd579c-4d5b-496d-bade-9a78e439970d\") " pod="openshift-marketplace/certified-operators-m9ppw" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.825139 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8d010a2-1cec-4e71-ac60-29b2e20787f4-utilities\") pod \"community-operators-fzncx\" (UID: \"a8d010a2-1cec-4e71-ac60-29b2e20787f4\") " pod="openshift-marketplace/community-operators-fzncx" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.825183 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb2rc\" (UniqueName: \"kubernetes.io/projected/a8d010a2-1cec-4e71-ac60-29b2e20787f4-kube-api-access-bb2rc\") pod \"community-operators-fzncx\" (UID: \"a8d010a2-1cec-4e71-ac60-29b2e20787f4\") " pod="openshift-marketplace/community-operators-fzncx" Feb 27 18:47:59 crc kubenswrapper[4981]: E0227 18:47:59.825447 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:48:00.325436212 +0000 UTC m=+179.804217362 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.825884 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3bd579c-4d5b-496d-bade-9a78e439970d-catalog-content\") pod \"certified-operators-m9ppw\" (UID: \"e3bd579c-4d5b-496d-bade-9a78e439970d\") " pod="openshift-marketplace/certified-operators-m9ppw" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.826344 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3bd579c-4d5b-496d-bade-9a78e439970d-utilities\") pod \"certified-operators-m9ppw\" (UID: \"e3bd579c-4d5b-496d-bade-9a78e439970d\") " pod="openshift-marketplace/certified-operators-m9ppw" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.869691 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lmqj\" (UniqueName: \"kubernetes.io/projected/e3bd579c-4d5b-496d-bade-9a78e439970d-kube-api-access-5lmqj\") pod \"certified-operators-m9ppw\" (UID: \"e3bd579c-4d5b-496d-bade-9a78e439970d\") " pod="openshift-marketplace/certified-operators-m9ppw" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.879395 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m9ppw" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.905136 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.909291 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.925646 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.925852 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb2rc\" (UniqueName: \"kubernetes.io/projected/a8d010a2-1cec-4e71-ac60-29b2e20787f4-kube-api-access-bb2rc\") pod \"community-operators-fzncx\" (UID: \"a8d010a2-1cec-4e71-ac60-29b2e20787f4\") " pod="openshift-marketplace/community-operators-fzncx" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.925911 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8d010a2-1cec-4e71-ac60-29b2e20787f4-catalog-content\") pod \"community-operators-fzncx\" (UID: \"a8d010a2-1cec-4e71-ac60-29b2e20787f4\") " pod="openshift-marketplace/community-operators-fzncx" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.925974 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8d010a2-1cec-4e71-ac60-29b2e20787f4-utilities\") pod \"community-operators-fzncx\" (UID: \"a8d010a2-1cec-4e71-ac60-29b2e20787f4\") " 
pod="openshift-marketplace/community-operators-fzncx" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.926379 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8d010a2-1cec-4e71-ac60-29b2e20787f4-utilities\") pod \"community-operators-fzncx\" (UID: \"a8d010a2-1cec-4e71-ac60-29b2e20787f4\") " pod="openshift-marketplace/community-operators-fzncx" Feb 27 18:47:59 crc kubenswrapper[4981]: E0227 18:47:59.926454 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:48:00.426439713 +0000 UTC m=+179.905220873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.926875 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8d010a2-1cec-4e71-ac60-29b2e20787f4-catalog-content\") pod \"community-operators-fzncx\" (UID: \"a8d010a2-1cec-4e71-ac60-29b2e20787f4\") " pod="openshift-marketplace/community-operators-fzncx" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.936811 4981 patch_prober.go:28] interesting pod/router-default-5444994796-r858c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 18:47:59 crc 
kubenswrapper[4981]: [-]has-synced failed: reason withheld Feb 27 18:47:59 crc kubenswrapper[4981]: [+]process-running ok Feb 27 18:47:59 crc kubenswrapper[4981]: healthz check failed Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.936859 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-r858c" podUID="2d1ccfd2-99ab-4caf-82d3-6b58656de39f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.955567 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rvh5h"] Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.956468 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rvh5h" Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.977597 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rvh5h"] Feb 27 18:47:59 crc kubenswrapper[4981]: I0227 18:47:59.998283 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb2rc\" (UniqueName: \"kubernetes.io/projected/a8d010a2-1cec-4e71-ac60-29b2e20787f4-kube-api-access-bb2rc\") pod \"community-operators-fzncx\" (UID: \"a8d010a2-1cec-4e71-ac60-29b2e20787f4\") " pod="openshift-marketplace/community-operators-fzncx" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.030944 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:48:00 crc kubenswrapper[4981]: E0227 18:48:00.031224 4981 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:48:00.53121165 +0000 UTC m=+180.009992810 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.053624 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fzncx" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.100324 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-plxr2" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.131656 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.131979 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vg7g\" (UniqueName: \"kubernetes.io/projected/5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3-kube-api-access-7vg7g\") pod \"certified-operators-rvh5h\" (UID: \"5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3\") " pod="openshift-marketplace/certified-operators-rvh5h" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.132015 4981 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3-utilities\") pod \"certified-operators-rvh5h\" (UID: \"5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3\") " pod="openshift-marketplace/certified-operators-rvh5h" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.132033 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3-catalog-content\") pod \"certified-operators-rvh5h\" (UID: \"5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3\") " pod="openshift-marketplace/certified-operators-rvh5h" Feb 27 18:48:00 crc kubenswrapper[4981]: E0227 18:48:00.132144 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:48:00.632129789 +0000 UTC m=+180.110910949 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.158910 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rhmlr"] Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.159766 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rhmlr" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.184305 4981 ???:1] "http: TLS handshake error from 192.168.126.11:57390: no serving certificate available for the kubelet" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.200504 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rhmlr"] Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.200549 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536968-jn8tc"] Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.201108 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536968-jn8tc" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.208252 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.208460 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.208651 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.234707 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3-utilities\") pod \"certified-operators-rvh5h\" (UID: \"5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3\") " pod="openshift-marketplace/certified-operators-rvh5h" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.234751 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3-catalog-content\") pod 
\"certified-operators-rvh5h\" (UID: \"5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3\") " pod="openshift-marketplace/certified-operators-rvh5h" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.234843 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.234868 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vg7g\" (UniqueName: \"kubernetes.io/projected/5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3-kube-api-access-7vg7g\") pod \"certified-operators-rvh5h\" (UID: \"5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3\") " pod="openshift-marketplace/certified-operators-rvh5h" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.235154 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536968-jn8tc"] Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.235657 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3-utilities\") pod \"certified-operators-rvh5h\" (UID: \"5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3\") " pod="openshift-marketplace/certified-operators-rvh5h" Feb 27 18:48:00 crc kubenswrapper[4981]: E0227 18:48:00.235717 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:48:00.735701499 +0000 UTC m=+180.214482659 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.239184 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3-catalog-content\") pod \"certified-operators-rvh5h\" (UID: \"5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3\") " pod="openshift-marketplace/certified-operators-rvh5h" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.270876 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vg7g\" (UniqueName: \"kubernetes.io/projected/5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3-kube-api-access-7vg7g\") pod \"certified-operators-rvh5h\" (UID: \"5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3\") " pod="openshift-marketplace/certified-operators-rvh5h" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.283561 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rvh5h" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.290258 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.345593 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.345758 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31a25fb4-5131-45cb-a965-eebe7bcf6a5d-catalog-content\") pod \"community-operators-rhmlr\" (UID: \"31a25fb4-5131-45cb-a965-eebe7bcf6a5d\") " pod="openshift-marketplace/community-operators-rhmlr" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.345782 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cckb\" (UniqueName: \"kubernetes.io/projected/31a25fb4-5131-45cb-a965-eebe7bcf6a5d-kube-api-access-4cckb\") pod \"community-operators-rhmlr\" (UID: \"31a25fb4-5131-45cb-a965-eebe7bcf6a5d\") " pod="openshift-marketplace/community-operators-rhmlr" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.345834 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw79t\" (UniqueName: \"kubernetes.io/projected/8bff5a34-e6d7-482d-bed3-dfe5269b225a-kube-api-access-nw79t\") pod \"auto-csr-approver-29536968-jn8tc\" (UID: \"8bff5a34-e6d7-482d-bed3-dfe5269b225a\") " pod="openshift-infra/auto-csr-approver-29536968-jn8tc" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 
18:48:00.345872 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31a25fb4-5131-45cb-a965-eebe7bcf6a5d-utilities\") pod \"community-operators-rhmlr\" (UID: \"31a25fb4-5131-45cb-a965-eebe7bcf6a5d\") " pod="openshift-marketplace/community-operators-rhmlr" Feb 27 18:48:00 crc kubenswrapper[4981]: E0227 18:48:00.345965 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:48:00.84595208 +0000 UTC m=+180.324733240 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.377446 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" event={"ID":"f5ecc4da-0cfa-4632-8478-e48a3c8aba36","Type":"ContainerStarted","Data":"ef465b147032e17dd6d389991f5d75ba8fdf313fd7ec44b521af5cb34d469955"} Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.377523 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" event={"ID":"f5ecc4da-0cfa-4632-8478-e48a3c8aba36","Type":"ContainerStarted","Data":"44a090f2b8e55ef80ae578dce80ac7ef77a3c70112335ed911a824ed39114e01"} Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.386304 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-ckjzg" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.448616 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31a25fb4-5131-45cb-a965-eebe7bcf6a5d-utilities\") pod \"community-operators-rhmlr\" (UID: \"31a25fb4-5131-45cb-a965-eebe7bcf6a5d\") " pod="openshift-marketplace/community-operators-rhmlr" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.448698 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31a25fb4-5131-45cb-a965-eebe7bcf6a5d-catalog-content\") pod \"community-operators-rhmlr\" (UID: \"31a25fb4-5131-45cb-a965-eebe7bcf6a5d\") " pod="openshift-marketplace/community-operators-rhmlr" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.448716 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cckb\" (UniqueName: \"kubernetes.io/projected/31a25fb4-5131-45cb-a965-eebe7bcf6a5d-kube-api-access-4cckb\") pod \"community-operators-rhmlr\" (UID: \"31a25fb4-5131-45cb-a965-eebe7bcf6a5d\") " pod="openshift-marketplace/community-operators-rhmlr" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.448746 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.448775 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw79t\" (UniqueName: \"kubernetes.io/projected/8bff5a34-e6d7-482d-bed3-dfe5269b225a-kube-api-access-nw79t\") pod 
\"auto-csr-approver-29536968-jn8tc\" (UID: \"8bff5a34-e6d7-482d-bed3-dfe5269b225a\") " pod="openshift-infra/auto-csr-approver-29536968-jn8tc" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.449686 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31a25fb4-5131-45cb-a965-eebe7bcf6a5d-utilities\") pod \"community-operators-rhmlr\" (UID: \"31a25fb4-5131-45cb-a965-eebe7bcf6a5d\") " pod="openshift-marketplace/community-operators-rhmlr" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.449878 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31a25fb4-5131-45cb-a965-eebe7bcf6a5d-catalog-content\") pod \"community-operators-rhmlr\" (UID: \"31a25fb4-5131-45cb-a965-eebe7bcf6a5d\") " pod="openshift-marketplace/community-operators-rhmlr" Feb 27 18:48:00 crc kubenswrapper[4981]: E0227 18:48:00.450207 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:48:00.95019364 +0000 UTC m=+180.428974800 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.493859 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cckb\" (UniqueName: \"kubernetes.io/projected/31a25fb4-5131-45cb-a965-eebe7bcf6a5d-kube-api-access-4cckb\") pod \"community-operators-rhmlr\" (UID: \"31a25fb4-5131-45cb-a965-eebe7bcf6a5d\") " pod="openshift-marketplace/community-operators-rhmlr" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.497166 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw79t\" (UniqueName: \"kubernetes.io/projected/8bff5a34-e6d7-482d-bed3-dfe5269b225a-kube-api-access-nw79t\") pod \"auto-csr-approver-29536968-jn8tc\" (UID: \"8bff5a34-e6d7-482d-bed3-dfe5269b225a\") " pod="openshift-infra/auto-csr-approver-29536968-jn8tc" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.522753 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536968-jn8tc" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.558724 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:48:00 crc kubenswrapper[4981]: E0227 18:48:00.559803 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:48:01.059781352 +0000 UTC m=+180.538562512 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.665816 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:48:00 crc kubenswrapper[4981]: E0227 18:48:00.666100 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:48:01.166082445 +0000 UTC m=+180.644863605 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.766891 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:48:00 crc kubenswrapper[4981]: E0227 18:48:00.767505 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:48:01.267489929 +0000 UTC m=+180.746271089 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.767530 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fzncx"] Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.775279 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rhmlr" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.868210 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:48:00 crc kubenswrapper[4981]: E0227 18:48:00.868720 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:48:01.368708106 +0000 UTC m=+180.847489266 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.954320 4981 patch_prober.go:28] interesting pod/router-default-5444994796-r858c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 18:48:00 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld Feb 27 18:48:00 crc kubenswrapper[4981]: [+]process-running ok Feb 27 18:48:00 crc kubenswrapper[4981]: healthz check failed Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.954375 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-r858c" podUID="2d1ccfd2-99ab-4caf-82d3-6b58656de39f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 18:48:00 crc kubenswrapper[4981]: I0227 18:48:00.972306 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:48:00 crc kubenswrapper[4981]: E0227 18:48:00.972699 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 18:48:01.472680778 +0000 UTC m=+180.951461938 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.035165 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m9ppw"] Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.048397 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-kh5p2" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.076833 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:48:01 crc kubenswrapper[4981]: E0227 18:48:01.077160 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:48:01.577147345 +0000 UTC m=+181.055928505 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.169124 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rn6m5"] Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.179668 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.179755 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7d035ce-f026-4668-9fca-c344f1fe60e3-secret-volume\") pod \"b7d035ce-f026-4668-9fca-c344f1fe60e3\" (UID: \"b7d035ce-f026-4668-9fca-c344f1fe60e3\") " Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.179799 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7d035ce-f026-4668-9fca-c344f1fe60e3-config-volume\") pod \"b7d035ce-f026-4668-9fca-c344f1fe60e3\" (UID: \"b7d035ce-f026-4668-9fca-c344f1fe60e3\") " Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.179838 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th628\" (UniqueName: 
\"kubernetes.io/projected/b7d035ce-f026-4668-9fca-c344f1fe60e3-kube-api-access-th628\") pod \"b7d035ce-f026-4668-9fca-c344f1fe60e3\" (UID: \"b7d035ce-f026-4668-9fca-c344f1fe60e3\") " Feb 27 18:48:01 crc kubenswrapper[4981]: E0227 18:48:01.179899 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:48:01.679869569 +0000 UTC m=+181.158650719 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.180067 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.180257 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536968-jn8tc"] Feb 27 18:48:01 crc kubenswrapper[4981]: E0227 18:48:01.180328 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-27 18:48:01.680317003 +0000 UTC m=+181.159098163 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.180490 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7d035ce-f026-4668-9fca-c344f1fe60e3-config-volume" (OuterVolumeSpecName: "config-volume") pod "b7d035ce-f026-4668-9fca-c344f1fe60e3" (UID: "b7d035ce-f026-4668-9fca-c344f1fe60e3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.187263 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7d035ce-f026-4668-9fca-c344f1fe60e3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b7d035ce-f026-4668-9fca-c344f1fe60e3" (UID: "b7d035ce-f026-4668-9fca-c344f1fe60e3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.197505 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7d035ce-f026-4668-9fca-c344f1fe60e3-kube-api-access-th628" (OuterVolumeSpecName: "kube-api-access-th628") pod "b7d035ce-f026-4668-9fca-c344f1fe60e3" (UID: "b7d035ce-f026-4668-9fca-c344f1fe60e3"). InnerVolumeSpecName "kube-api-access-th628". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.203155 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv"] Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.203340 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv" podUID="10e31a3f-eb88-4c8b-93e7-e251f762d29e" containerName="route-controller-manager" containerID="cri-o://0df29c09071a7b4da97a71a4491419a04da0e713c1a2aa146d54add6444af2f3" gracePeriod=30 Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.213362 4981 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.215347 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rvh5h"] Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.283603 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.283891 4981 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7d035ce-f026-4668-9fca-c344f1fe60e3-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.283905 4981 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7d035ce-f026-4668-9fca-c344f1fe60e3-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.283914 4981 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th628\" (UniqueName: \"kubernetes.io/projected/b7d035ce-f026-4668-9fca-c344f1fe60e3-kube-api-access-th628\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:01 crc kubenswrapper[4981]: E0227 18:48:01.283974 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:48:01.783960394 +0000 UTC m=+181.262741554 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.320064 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rhmlr"] Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.334792 4981 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.386153 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:48:01 crc kubenswrapper[4981]: E0227 18:48:01.386517 4981 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:48:01.886504992 +0000 UTC m=+181.365286152 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.391540 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-kh5p2" event={"ID":"b7d035ce-f026-4668-9fca-c344f1fe60e3","Type":"ContainerDied","Data":"bb0f23f9b6b95712fecdac102863e27bfeb0b1653c9b43001a872ee7f4c8ff34"} Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.391565 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb0f23f9b6b95712fecdac102863e27bfeb0b1653c9b43001a872ee7f4c8ff34" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.391625 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-kh5p2" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.396028 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536968-jn8tc" event={"ID":"8bff5a34-e6d7-482d-bed3-dfe5269b225a","Type":"ContainerStarted","Data":"b0807f845fcbdd39b44ca659de0f6e0ae98829adde27abbc803ff9140fa7747a"} Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.396871 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvh5h" event={"ID":"5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3","Type":"ContainerStarted","Data":"260e200f77b7d050bc98b85e3e9ba8dedee4f4294c2cd6bdd95b11dd5b190e83"} Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.397970 4981 generic.go:334] "Generic (PLEG): container finished" podID="10e31a3f-eb88-4c8b-93e7-e251f762d29e" containerID="0df29c09071a7b4da97a71a4491419a04da0e713c1a2aa146d54add6444af2f3" exitCode=0 Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.398042 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv" event={"ID":"10e31a3f-eb88-4c8b-93e7-e251f762d29e","Type":"ContainerDied","Data":"0df29c09071a7b4da97a71a4491419a04da0e713c1a2aa146d54add6444af2f3"} Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.398908 4981 generic.go:334] "Generic (PLEG): container finished" podID="e3bd579c-4d5b-496d-bade-9a78e439970d" containerID="c34e40ab5ee7bba2243e1d19cb08fff5401ef6fcc74e4c7cb14b00fded1ffd44" exitCode=0 Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.398948 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m9ppw" event={"ID":"e3bd579c-4d5b-496d-bade-9a78e439970d","Type":"ContainerDied","Data":"c34e40ab5ee7bba2243e1d19cb08fff5401ef6fcc74e4c7cb14b00fded1ffd44"} Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.398962 4981 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m9ppw" event={"ID":"e3bd579c-4d5b-496d-bade-9a78e439970d","Type":"ContainerStarted","Data":"6da68719a15ea42886b03e2b321b562377ca489a9a8128a2e8eeb35efc11eef4"} Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.400278 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rhmlr" event={"ID":"31a25fb4-5131-45cb-a965-eebe7bcf6a5d","Type":"ContainerStarted","Data":"ae3de0f3d76ae351393a25e06d3c5341bf0b5813364a807a8b92cd398acad420"} Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.401527 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" event={"ID":"f5ecc4da-0cfa-4632-8478-e48a3c8aba36","Type":"ContainerStarted","Data":"221e4f8e0b710dd3a1fadfb2fc26c247b6edf6c0dea6a101b25fa6bc72908036"} Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.402564 4981 generic.go:334] "Generic (PLEG): container finished" podID="a8d010a2-1cec-4e71-ac60-29b2e20787f4" containerID="18aabbeb99493219fdc229ebdaac94be7dc1cedf4fa9236d8443cbb581db4a7c" exitCode=0 Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.403190 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fzncx" event={"ID":"a8d010a2-1cec-4e71-ac60-29b2e20787f4","Type":"ContainerDied","Data":"18aabbeb99493219fdc229ebdaac94be7dc1cedf4fa9236d8443cbb581db4a7c"} Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.403269 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fzncx" event={"ID":"a8d010a2-1cec-4e71-ac60-29b2e20787f4","Type":"ContainerStarted","Data":"7f92747db14f2414b7569e5d0c179f323a609473464cd4e87055d3fd461a6ee1"} Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.403631 4981 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5" podUID="89fbbf29-4a7d-40d3-ad12-9e1111396e8d" containerName="controller-manager" containerID="cri-o://6c70de6feb244f9cd8ccc039ed9a2e77eb002500df9fabcd1b45d5c1baba4187" gracePeriod=30 Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.474641 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" podStartSLOduration=9.474617764 podStartE2EDuration="9.474617764s" podCreationTimestamp="2026-02-27 18:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:48:01.46920687 +0000 UTC m=+180.947988030" watchObservedRunningTime="2026-02-27 18:48:01.474617764 +0000 UTC m=+180.953398924" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.487695 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:48:01 crc kubenswrapper[4981]: E0227 18:48:01.487894 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:48:01.987863455 +0000 UTC m=+181.466644615 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.488145 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:48:01 crc kubenswrapper[4981]: E0227 18:48:01.488528 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:48:01.988521245 +0000 UTC m=+181.467302405 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.533300 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6nhjk"] Feb 27 18:48:01 crc kubenswrapper[4981]: E0227 18:48:01.533492 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d035ce-f026-4668-9fca-c344f1fe60e3" containerName="collect-profiles" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.533508 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d035ce-f026-4668-9fca-c344f1fe60e3" containerName="collect-profiles" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.533625 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d035ce-f026-4668-9fca-c344f1fe60e3" containerName="collect-profiles" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.546621 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6nhjk" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.549224 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6nhjk"] Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.558485 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.574360 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.589623 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:48:01 crc kubenswrapper[4981]: E0227 18:48:01.589910 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:48:02.089886198 +0000 UTC m=+181.568667348 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.590004 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:48:01 crc kubenswrapper[4981]: E0227 18:48:01.591612 4981 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:48:02.0915969 +0000 UTC m=+181.570378060 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.690913 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10e31a3f-eb88-4c8b-93e7-e251f762d29e-config\") pod \"10e31a3f-eb88-4c8b-93e7-e251f762d29e\" (UID: \"10e31a3f-eb88-4c8b-93e7-e251f762d29e\") " Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.691036 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.691076 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10e31a3f-eb88-4c8b-93e7-e251f762d29e-client-ca\") pod \"10e31a3f-eb88-4c8b-93e7-e251f762d29e\" (UID: \"10e31a3f-eb88-4c8b-93e7-e251f762d29e\") " Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.691142 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgbg7\" (UniqueName: 
\"kubernetes.io/projected/10e31a3f-eb88-4c8b-93e7-e251f762d29e-kube-api-access-lgbg7\") pod \"10e31a3f-eb88-4c8b-93e7-e251f762d29e\" (UID: \"10e31a3f-eb88-4c8b-93e7-e251f762d29e\") " Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.691223 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10e31a3f-eb88-4c8b-93e7-e251f762d29e-serving-cert\") pod \"10e31a3f-eb88-4c8b-93e7-e251f762d29e\" (UID: \"10e31a3f-eb88-4c8b-93e7-e251f762d29e\") " Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.691402 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afecaba0-c366-4a2f-a944-1a282869a955-catalog-content\") pod \"redhat-marketplace-6nhjk\" (UID: \"afecaba0-c366-4a2f-a944-1a282869a955\") " pod="openshift-marketplace/redhat-marketplace-6nhjk" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.691432 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afecaba0-c366-4a2f-a944-1a282869a955-utilities\") pod \"redhat-marketplace-6nhjk\" (UID: \"afecaba0-c366-4a2f-a944-1a282869a955\") " pod="openshift-marketplace/redhat-marketplace-6nhjk" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.691507 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhr5l\" (UniqueName: \"kubernetes.io/projected/afecaba0-c366-4a2f-a944-1a282869a955-kube-api-access-nhr5l\") pod \"redhat-marketplace-6nhjk\" (UID: \"afecaba0-c366-4a2f-a944-1a282869a955\") " pod="openshift-marketplace/redhat-marketplace-6nhjk" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.692478 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10e31a3f-eb88-4c8b-93e7-e251f762d29e-config" 
(OuterVolumeSpecName: "config") pod "10e31a3f-eb88-4c8b-93e7-e251f762d29e" (UID: "10e31a3f-eb88-4c8b-93e7-e251f762d29e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:48:01 crc kubenswrapper[4981]: E0227 18:48:01.692547 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 18:48:02.19253399 +0000 UTC m=+181.671315150 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.692903 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10e31a3f-eb88-4c8b-93e7-e251f762d29e-client-ca" (OuterVolumeSpecName: "client-ca") pod "10e31a3f-eb88-4c8b-93e7-e251f762d29e" (UID: "10e31a3f-eb88-4c8b-93e7-e251f762d29e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.712348 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10e31a3f-eb88-4c8b-93e7-e251f762d29e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "10e31a3f-eb88-4c8b-93e7-e251f762d29e" (UID: "10e31a3f-eb88-4c8b-93e7-e251f762d29e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.715009 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10e31a3f-eb88-4c8b-93e7-e251f762d29e-kube-api-access-lgbg7" (OuterVolumeSpecName: "kube-api-access-lgbg7") pod "10e31a3f-eb88-4c8b-93e7-e251f762d29e" (UID: "10e31a3f-eb88-4c8b-93e7-e251f762d29e"). InnerVolumeSpecName "kube-api-access-lgbg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.719501 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 27 18:48:01 crc kubenswrapper[4981]: E0227 18:48:01.719700 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10e31a3f-eb88-4c8b-93e7-e251f762d29e" containerName="route-controller-manager" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.719716 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="10e31a3f-eb88-4c8b-93e7-e251f762d29e" containerName="route-controller-manager" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.719819 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="10e31a3f-eb88-4c8b-93e7-e251f762d29e" containerName="route-controller-manager" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.720985 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.723468 4981 patch_prober.go:28] interesting pod/apiserver-76f77b778f-g8dqb container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 27 18:48:01 crc kubenswrapper[4981]: [+]log ok Feb 27 18:48:01 crc kubenswrapper[4981]: [+]etcd ok Feb 27 18:48:01 crc kubenswrapper[4981]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 27 18:48:01 crc kubenswrapper[4981]: [+]poststarthook/generic-apiserver-start-informers ok Feb 27 18:48:01 crc kubenswrapper[4981]: [+]poststarthook/max-in-flight-filter ok Feb 27 18:48:01 crc kubenswrapper[4981]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 27 18:48:01 crc kubenswrapper[4981]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 27 18:48:01 crc kubenswrapper[4981]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 27 18:48:01 crc kubenswrapper[4981]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 27 18:48:01 crc kubenswrapper[4981]: [+]poststarthook/project.openshift.io-projectcache ok Feb 27 18:48:01 crc kubenswrapper[4981]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 27 18:48:01 crc kubenswrapper[4981]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Feb 27 18:48:01 crc kubenswrapper[4981]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 27 18:48:01 crc kubenswrapper[4981]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 27 18:48:01 crc kubenswrapper[4981]: livez check failed Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.723545 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" podUID="deda0ab1-f81e-4898-b4cb-5627947b5ed4" 
containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.724252 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.724489 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.741409 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.792413 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afecaba0-c366-4a2f-a944-1a282869a955-catalog-content\") pod \"redhat-marketplace-6nhjk\" (UID: \"afecaba0-c366-4a2f-a944-1a282869a955\") " pod="openshift-marketplace/redhat-marketplace-6nhjk" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.792455 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afecaba0-c366-4a2f-a944-1a282869a955-utilities\") pod \"redhat-marketplace-6nhjk\" (UID: \"afecaba0-c366-4a2f-a944-1a282869a955\") " pod="openshift-marketplace/redhat-marketplace-6nhjk" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.792518 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.792557 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nhr5l\" (UniqueName: \"kubernetes.io/projected/afecaba0-c366-4a2f-a944-1a282869a955-kube-api-access-nhr5l\") pod \"redhat-marketplace-6nhjk\" (UID: \"afecaba0-c366-4a2f-a944-1a282869a955\") " pod="openshift-marketplace/redhat-marketplace-6nhjk" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.792619 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10e31a3f-eb88-4c8b-93e7-e251f762d29e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.792630 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10e31a3f-eb88-4c8b-93e7-e251f762d29e-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.792639 4981 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10e31a3f-eb88-4c8b-93e7-e251f762d29e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.792648 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgbg7\" (UniqueName: \"kubernetes.io/projected/10e31a3f-eb88-4c8b-93e7-e251f762d29e-kube-api-access-lgbg7\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.792859 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afecaba0-c366-4a2f-a944-1a282869a955-catalog-content\") pod \"redhat-marketplace-6nhjk\" (UID: \"afecaba0-c366-4a2f-a944-1a282869a955\") " pod="openshift-marketplace/redhat-marketplace-6nhjk" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.791604 4981 reconciler.go:161] "OperationExecutor.RegisterPlugin started" 
plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-27T18:48:01.334811636Z","Handler":null,"Name":""} Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.792977 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afecaba0-c366-4a2f-a944-1a282869a955-utilities\") pod \"redhat-marketplace-6nhjk\" (UID: \"afecaba0-c366-4a2f-a944-1a282869a955\") " pod="openshift-marketplace/redhat-marketplace-6nhjk" Feb 27 18:48:01 crc kubenswrapper[4981]: E0227 18:48:01.793151 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 18:48:02.293139159 +0000 UTC m=+181.771920319 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-kmvhm" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.806104 4981 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.806137 4981 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.808408 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nhr5l\" (UniqueName: \"kubernetes.io/projected/afecaba0-c366-4a2f-a944-1a282869a955-kube-api-access-nhr5l\") pod \"redhat-marketplace-6nhjk\" (UID: \"afecaba0-c366-4a2f-a944-1a282869a955\") " pod="openshift-marketplace/redhat-marketplace-6nhjk" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.860422 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6nhjk" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.893837 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.894198 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.894254 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.901411 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod 
"8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.937828 4981 patch_prober.go:28] interesting pod/router-default-5444994796-r858c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 18:48:01 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld Feb 27 18:48:01 crc kubenswrapper[4981]: [+]process-running ok Feb 27 18:48:01 crc kubenswrapper[4981]: healthz check failed Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.937868 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-r858c" podUID="2d1ccfd2-99ab-4caf-82d3-6b58656de39f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.942444 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6w9qb"] Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.943931 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6w9qb" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.951230 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6w9qb"] Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.995732 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.995809 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.995860 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.995910 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.997975 4981 csi_attacher.go:380] kubernetes.io/csi: 
attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 18:48:01 crc kubenswrapper[4981]: I0227 18:48:01.998022 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.023042 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.040550 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-kmvhm\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.081482 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.089340 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.098637 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drxn7\" (UniqueName: \"kubernetes.io/projected/b0d12f02-fe5f-4ca7-a190-852ad6284190-kube-api-access-drxn7\") pod \"redhat-marketplace-6w9qb\" (UID: \"b0d12f02-fe5f-4ca7-a190-852ad6284190\") " pod="openshift-marketplace/redhat-marketplace-6w9qb" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.098708 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d12f02-fe5f-4ca7-a190-852ad6284190-utilities\") pod \"redhat-marketplace-6w9qb\" (UID: \"b0d12f02-fe5f-4ca7-a190-852ad6284190\") " pod="openshift-marketplace/redhat-marketplace-6w9qb" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.098759 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d12f02-fe5f-4ca7-a190-852ad6284190-catalog-content\") pod \"redhat-marketplace-6w9qb\" (UID: \"b0d12f02-fe5f-4ca7-a190-852ad6284190\") " pod="openshift-marketplace/redhat-marketplace-6w9qb" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.101814 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98"] Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.102593 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.106610 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.109478 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6nhjk"] Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.114235 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98"] Feb 27 18:48:02 crc kubenswrapper[4981]: W0227 18:48:02.119715 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafecaba0_c366_4a2f_a944_1a282869a955.slice/crio-3e7713dd77c0d3d4d940d135824afd0d7643c141d8d71b4e397a28d6ff4687a9 WatchSource:0}: Error finding container 3e7713dd77c0d3d4d940d135824afd0d7643c141d8d71b4e397a28d6ff4687a9: Status 404 returned error can't find the container with id 3e7713dd77c0d3d4d940d135824afd0d7643c141d8d71b4e397a28d6ff4687a9 Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.199921 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d12f02-fe5f-4ca7-a190-852ad6284190-utilities\") pod \"redhat-marketplace-6w9qb\" (UID: \"b0d12f02-fe5f-4ca7-a190-852ad6284190\") " pod="openshift-marketplace/redhat-marketplace-6w9qb" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.200244 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/913aa7e7-bbd3-4ba3-8e13-fb2fd74261da-serving-cert\") pod \"route-controller-manager-79c944dc4f-86j98\" (UID: \"913aa7e7-bbd3-4ba3-8e13-fb2fd74261da\") " pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.200283 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/913aa7e7-bbd3-4ba3-8e13-fb2fd74261da-client-ca\") pod \"route-controller-manager-79c944dc4f-86j98\" (UID: \"913aa7e7-bbd3-4ba3-8e13-fb2fd74261da\") " pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.200326 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913aa7e7-bbd3-4ba3-8e13-fb2fd74261da-config\") pod \"route-controller-manager-79c944dc4f-86j98\" (UID: \"913aa7e7-bbd3-4ba3-8e13-fb2fd74261da\") " pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.200349 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d12f02-fe5f-4ca7-a190-852ad6284190-catalog-content\") pod \"redhat-marketplace-6w9qb\" (UID: \"b0d12f02-fe5f-4ca7-a190-852ad6284190\") " pod="openshift-marketplace/redhat-marketplace-6w9qb" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.200405 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drxn7\" (UniqueName: \"kubernetes.io/projected/b0d12f02-fe5f-4ca7-a190-852ad6284190-kube-api-access-drxn7\") pod \"redhat-marketplace-6w9qb\" (UID: \"b0d12f02-fe5f-4ca7-a190-852ad6284190\") " pod="openshift-marketplace/redhat-marketplace-6w9qb" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.200429 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw5ss\" (UniqueName: \"kubernetes.io/projected/913aa7e7-bbd3-4ba3-8e13-fb2fd74261da-kube-api-access-hw5ss\") pod \"route-controller-manager-79c944dc4f-86j98\" (UID: \"913aa7e7-bbd3-4ba3-8e13-fb2fd74261da\") " 
pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.200654 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d12f02-fe5f-4ca7-a190-852ad6284190-utilities\") pod \"redhat-marketplace-6w9qb\" (UID: \"b0d12f02-fe5f-4ca7-a190-852ad6284190\") " pod="openshift-marketplace/redhat-marketplace-6w9qb" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.200837 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d12f02-fe5f-4ca7-a190-852ad6284190-catalog-content\") pod \"redhat-marketplace-6w9qb\" (UID: \"b0d12f02-fe5f-4ca7-a190-852ad6284190\") " pod="openshift-marketplace/redhat-marketplace-6w9qb" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.224776 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drxn7\" (UniqueName: \"kubernetes.io/projected/b0d12f02-fe5f-4ca7-a190-852ad6284190-kube-api-access-drxn7\") pod \"redhat-marketplace-6w9qb\" (UID: \"b0d12f02-fe5f-4ca7-a190-852ad6284190\") " pod="openshift-marketplace/redhat-marketplace-6w9qb" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.290428 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.295986 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6w9qb" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.301767 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw5ss\" (UniqueName: \"kubernetes.io/projected/913aa7e7-bbd3-4ba3-8e13-fb2fd74261da-kube-api-access-hw5ss\") pod \"route-controller-manager-79c944dc4f-86j98\" (UID: \"913aa7e7-bbd3-4ba3-8e13-fb2fd74261da\") " pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.301832 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/913aa7e7-bbd3-4ba3-8e13-fb2fd74261da-serving-cert\") pod \"route-controller-manager-79c944dc4f-86j98\" (UID: \"913aa7e7-bbd3-4ba3-8e13-fb2fd74261da\") " pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.301859 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/913aa7e7-bbd3-4ba3-8e13-fb2fd74261da-client-ca\") pod \"route-controller-manager-79c944dc4f-86j98\" (UID: \"913aa7e7-bbd3-4ba3-8e13-fb2fd74261da\") " pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.301900 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913aa7e7-bbd3-4ba3-8e13-fb2fd74261da-config\") pod \"route-controller-manager-79c944dc4f-86j98\" (UID: \"913aa7e7-bbd3-4ba3-8e13-fb2fd74261da\") " pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.302912 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/913aa7e7-bbd3-4ba3-8e13-fb2fd74261da-client-ca\") pod \"route-controller-manager-79c944dc4f-86j98\" (UID: \"913aa7e7-bbd3-4ba3-8e13-fb2fd74261da\") " pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.311691 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913aa7e7-bbd3-4ba3-8e13-fb2fd74261da-config\") pod \"route-controller-manager-79c944dc4f-86j98\" (UID: \"913aa7e7-bbd3-4ba3-8e13-fb2fd74261da\") " pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.312572 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/913aa7e7-bbd3-4ba3-8e13-fb2fd74261da-serving-cert\") pod \"route-controller-manager-79c944dc4f-86j98\" (UID: \"913aa7e7-bbd3-4ba3-8e13-fb2fd74261da\") " pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.366689 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw5ss\" (UniqueName: \"kubernetes.io/projected/913aa7e7-bbd3-4ba3-8e13-fb2fd74261da-kube-api-access-hw5ss\") pod \"route-controller-manager-79c944dc4f-86j98\" (UID: \"913aa7e7-bbd3-4ba3-8e13-fb2fd74261da\") " pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.375093 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.405442 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-config\") pod 
\"89fbbf29-4a7d-40d3-ad12-9e1111396e8d\" (UID: \"89fbbf29-4a7d-40d3-ad12-9e1111396e8d\") " Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.405520 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-client-ca\") pod \"89fbbf29-4a7d-40d3-ad12-9e1111396e8d\" (UID: \"89fbbf29-4a7d-40d3-ad12-9e1111396e8d\") " Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.405559 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-serving-cert\") pod \"89fbbf29-4a7d-40d3-ad12-9e1111396e8d\" (UID: \"89fbbf29-4a7d-40d3-ad12-9e1111396e8d\") " Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.405596 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-proxy-ca-bundles\") pod \"89fbbf29-4a7d-40d3-ad12-9e1111396e8d\" (UID: \"89fbbf29-4a7d-40d3-ad12-9e1111396e8d\") " Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.405629 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdvf5\" (UniqueName: \"kubernetes.io/projected/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-kube-api-access-rdvf5\") pod \"89fbbf29-4a7d-40d3-ad12-9e1111396e8d\" (UID: \"89fbbf29-4a7d-40d3-ad12-9e1111396e8d\") " Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.407655 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-client-ca" (OuterVolumeSpecName: "client-ca") pod "89fbbf29-4a7d-40d3-ad12-9e1111396e8d" (UID: "89fbbf29-4a7d-40d3-ad12-9e1111396e8d"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.407772 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "89fbbf29-4a7d-40d3-ad12-9e1111396e8d" (UID: "89fbbf29-4a7d-40d3-ad12-9e1111396e8d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.415664 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-kube-api-access-rdvf5" (OuterVolumeSpecName: "kube-api-access-rdvf5") pod "89fbbf29-4a7d-40d3-ad12-9e1111396e8d" (UID: "89fbbf29-4a7d-40d3-ad12-9e1111396e8d"). InnerVolumeSpecName "kube-api-access-rdvf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.416730 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-config" (OuterVolumeSpecName: "config") pod "89fbbf29-4a7d-40d3-ad12-9e1111396e8d" (UID: "89fbbf29-4a7d-40d3-ad12-9e1111396e8d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.417382 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "89fbbf29-4a7d-40d3-ad12-9e1111396e8d" (UID: "89fbbf29-4a7d-40d3-ad12-9e1111396e8d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.422288 4981 generic.go:334] "Generic (PLEG): container finished" podID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" containerID="c02904ebbf79c49323ebdb9533eef83b081fdc3659409d21123f959212468b81" exitCode=0 Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.422325 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rhmlr" event={"ID":"31a25fb4-5131-45cb-a965-eebe7bcf6a5d","Type":"ContainerDied","Data":"c02904ebbf79c49323ebdb9533eef83b081fdc3659409d21123f959212468b81"} Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.422845 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.426529 4981 generic.go:334] "Generic (PLEG): container finished" podID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" containerID="02f9a36df7feb437b761c11c80b13ef63110c395ee32cac04446b9defa8e2922" exitCode=0 Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.426590 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvh5h" event={"ID":"5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3","Type":"ContainerDied","Data":"02f9a36df7feb437b761c11c80b13ef63110c395ee32cac04446b9defa8e2922"} Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.428889 4981 generic.go:334] "Generic (PLEG): container finished" podID="89fbbf29-4a7d-40d3-ad12-9e1111396e8d" containerID="6c70de6feb244f9cd8ccc039ed9a2e77eb002500df9fabcd1b45d5c1baba4187" exitCode=0 Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.428918 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5" 
event={"ID":"89fbbf29-4a7d-40d3-ad12-9e1111396e8d","Type":"ContainerDied","Data":"6c70de6feb244f9cd8ccc039ed9a2e77eb002500df9fabcd1b45d5c1baba4187"} Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.428946 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5" event={"ID":"89fbbf29-4a7d-40d3-ad12-9e1111396e8d","Type":"ContainerDied","Data":"d8f2b86c6ee8c6c5f717e9689e29bbe68b5de87293db2a266267c873d0ceab26"} Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.428951 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rn6m5" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.428965 4981 scope.go:117] "RemoveContainer" containerID="6c70de6feb244f9cd8ccc039ed9a2e77eb002500df9fabcd1b45d5c1baba4187" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.431366 4981 generic.go:334] "Generic (PLEG): container finished" podID="afecaba0-c366-4a2f-a944-1a282869a955" containerID="6979c761e8a3730a041c26b778d8309d38f15a1adcdfa857211a452e93e43d93" exitCode=0 Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.431430 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6nhjk" event={"ID":"afecaba0-c366-4a2f-a944-1a282869a955","Type":"ContainerDied","Data":"6979c761e8a3730a041c26b778d8309d38f15a1adcdfa857211a452e93e43d93"} Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.431446 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6nhjk" event={"ID":"afecaba0-c366-4a2f-a944-1a282869a955","Type":"ContainerStarted","Data":"3e7713dd77c0d3d4d940d135824afd0d7643c141d8d71b4e397a28d6ff4687a9"} Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.433700 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv" 
event={"ID":"10e31a3f-eb88-4c8b-93e7-e251f762d29e","Type":"ContainerDied","Data":"8a15199296ee77b1150176cf676274ec35958768c4c01a43d4aabf77f7f659ff"} Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.433793 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.446632 4981 scope.go:117] "RemoveContainer" containerID="6c70de6feb244f9cd8ccc039ed9a2e77eb002500df9fabcd1b45d5c1baba4187" Feb 27 18:48:02 crc kubenswrapper[4981]: E0227 18:48:02.447129 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c70de6feb244f9cd8ccc039ed9a2e77eb002500df9fabcd1b45d5c1baba4187\": container with ID starting with 6c70de6feb244f9cd8ccc039ed9a2e77eb002500df9fabcd1b45d5c1baba4187 not found: ID does not exist" containerID="6c70de6feb244f9cd8ccc039ed9a2e77eb002500df9fabcd1b45d5c1baba4187" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.447155 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c70de6feb244f9cd8ccc039ed9a2e77eb002500df9fabcd1b45d5c1baba4187"} err="failed to get container status \"6c70de6feb244f9cd8ccc039ed9a2e77eb002500df9fabcd1b45d5c1baba4187\": rpc error: code = NotFound desc = could not find container \"6c70de6feb244f9cd8ccc039ed9a2e77eb002500df9fabcd1b45d5c1baba4187\": container with ID starting with 6c70de6feb244f9cd8ccc039ed9a2e77eb002500df9fabcd1b45d5c1baba4187 not found: ID does not exist" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.447175 4981 scope.go:117] "RemoveContainer" containerID="0df29c09071a7b4da97a71a4491419a04da0e713c1a2aa146d54add6444af2f3" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.468117 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kmvhm"] Feb 27 18:48:02 crc 
kubenswrapper[4981]: I0227 18:48:02.491743 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rn6m5"] Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.508436 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.508461 4981 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.508478 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.508486 4981 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.508496 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdvf5\" (UniqueName: \"kubernetes.io/projected/89fbbf29-4a7d-40d3-ad12-9e1111396e8d-kube-api-access-rdvf5\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.511561 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rn6m5"] Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.515411 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv"] Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.517936 4981 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-ns4jv"] Feb 27 18:48:02 crc kubenswrapper[4981]: E0227 18:48:02.566939 4981 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89fbbf29_4a7d_40d3_ad12_9e1111396e8d.slice/crio-d8f2b86c6ee8c6c5f717e9689e29bbe68b5de87293db2a266267c873d0ceab26\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10e31a3f_eb88_4c8b_93e7_e251f762d29e.slice/crio-8a15199296ee77b1150176cf676274ec35958768c4c01a43d4aabf77f7f659ff\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10e31a3f_eb88_4c8b_93e7_e251f762d29e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89fbbf29_4a7d_40d3_ad12_9e1111396e8d.slice\": RecentStats: unable to find data in memory cache]" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.793783 4981 ???:1] "http: TLS handshake error from 192.168.126.11:57394: no serving certificate available for the kubelet" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.849638 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6w9qb"] Feb 27 18:48:02 crc kubenswrapper[4981]: W0227 18:48:02.872776 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0d12f02_fe5f_4ca7_a190_852ad6284190.slice/crio-f9ec8ac2b1c564b12a5e13c0ec4fa4657b9d1a30d7e7bd32e871e1e5fbf0d5b0 WatchSource:0}: Error finding container f9ec8ac2b1c564b12a5e13c0ec4fa4657b9d1a30d7e7bd32e871e1e5fbf0d5b0: Status 404 returned error can't find the container with id f9ec8ac2b1c564b12a5e13c0ec4fa4657b9d1a30d7e7bd32e871e1e5fbf0d5b0 Feb 27 18:48:02 crc kubenswrapper[4981]: 
I0227 18:48:02.928441 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98"] Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.935112 4981 patch_prober.go:28] interesting pod/router-default-5444994796-r858c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 18:48:02 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld Feb 27 18:48:02 crc kubenswrapper[4981]: [+]process-running ok Feb 27 18:48:02 crc kubenswrapper[4981]: healthz check failed Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.935159 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-r858c" podUID="2d1ccfd2-99ab-4caf-82d3-6b58656de39f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.936515 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rmtpf"] Feb 27 18:48:02 crc kubenswrapper[4981]: E0227 18:48:02.936724 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89fbbf29-4a7d-40d3-ad12-9e1111396e8d" containerName="controller-manager" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.936741 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="89fbbf29-4a7d-40d3-ad12-9e1111396e8d" containerName="controller-manager" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.936841 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="89fbbf29-4a7d-40d3-ad12-9e1111396e8d" containerName="controller-manager" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.937574 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rmtpf" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.944300 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rmtpf"] Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.944767 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.977556 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.983846 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.993808 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.993871 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 27 18:48:02 crc kubenswrapper[4981]: I0227 18:48:02.994015 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.117130 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkndf\" (UniqueName: \"kubernetes.io/projected/fbc8a428-3dab-402e-a105-0576aa196dcc-kube-api-access-rkndf\") pod \"redhat-operators-rmtpf\" (UID: \"fbc8a428-3dab-402e-a105-0576aa196dcc\") " pod="openshift-marketplace/redhat-operators-rmtpf" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.117247 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/614b8fd4-9d4f-45c1-a083-04597535ab5a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"614b8fd4-9d4f-45c1-a083-04597535ab5a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.117271 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc8a428-3dab-402e-a105-0576aa196dcc-catalog-content\") pod \"redhat-operators-rmtpf\" (UID: \"fbc8a428-3dab-402e-a105-0576aa196dcc\") " pod="openshift-marketplace/redhat-operators-rmtpf" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.117288 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/614b8fd4-9d4f-45c1-a083-04597535ab5a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"614b8fd4-9d4f-45c1-a083-04597535ab5a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.117324 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc8a428-3dab-402e-a105-0576aa196dcc-utilities\") pod \"redhat-operators-rmtpf\" (UID: \"fbc8a428-3dab-402e-a105-0576aa196dcc\") " pod="openshift-marketplace/redhat-operators-rmtpf" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.219007 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/614b8fd4-9d4f-45c1-a083-04597535ab5a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"614b8fd4-9d4f-45c1-a083-04597535ab5a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.219079 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/614b8fd4-9d4f-45c1-a083-04597535ab5a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"614b8fd4-9d4f-45c1-a083-04597535ab5a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.219113 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc8a428-3dab-402e-a105-0576aa196dcc-catalog-content\") pod \"redhat-operators-rmtpf\" (UID: \"fbc8a428-3dab-402e-a105-0576aa196dcc\") " pod="openshift-marketplace/redhat-operators-rmtpf" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.219173 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc8a428-3dab-402e-a105-0576aa196dcc-utilities\") pod \"redhat-operators-rmtpf\" (UID: \"fbc8a428-3dab-402e-a105-0576aa196dcc\") " pod="openshift-marketplace/redhat-operators-rmtpf" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.219175 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/614b8fd4-9d4f-45c1-a083-04597535ab5a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"614b8fd4-9d4f-45c1-a083-04597535ab5a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.219203 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkndf\" (UniqueName: \"kubernetes.io/projected/fbc8a428-3dab-402e-a105-0576aa196dcc-kube-api-access-rkndf\") pod \"redhat-operators-rmtpf\" (UID: \"fbc8a428-3dab-402e-a105-0576aa196dcc\") " pod="openshift-marketplace/redhat-operators-rmtpf" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.219740 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/fbc8a428-3dab-402e-a105-0576aa196dcc-utilities\") pod \"redhat-operators-rmtpf\" (UID: \"fbc8a428-3dab-402e-a105-0576aa196dcc\") " pod="openshift-marketplace/redhat-operators-rmtpf" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.219829 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc8a428-3dab-402e-a105-0576aa196dcc-catalog-content\") pod \"redhat-operators-rmtpf\" (UID: \"fbc8a428-3dab-402e-a105-0576aa196dcc\") " pod="openshift-marketplace/redhat-operators-rmtpf" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.236377 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/614b8fd4-9d4f-45c1-a083-04597535ab5a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"614b8fd4-9d4f-45c1-a083-04597535ab5a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.236459 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkndf\" (UniqueName: \"kubernetes.io/projected/fbc8a428-3dab-402e-a105-0576aa196dcc-kube-api-access-rkndf\") pod \"redhat-operators-rmtpf\" (UID: \"fbc8a428-3dab-402e-a105-0576aa196dcc\") " pod="openshift-marketplace/redhat-operators-rmtpf" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.299475 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rmtpf" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.326104 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.333526 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zrwz4"] Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.334472 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zrwz4" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.341945 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zrwz4"] Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.446177 4981 generic.go:334] "Generic (PLEG): container finished" podID="3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed" containerID="850c8576265845d808e7d9db471f51cc667d2868e6b4a8ee0e5f5a63db7db60b" exitCode=0 Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.446237 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed","Type":"ContainerDied","Data":"850c8576265845d808e7d9db471f51cc667d2868e6b4a8ee0e5f5a63db7db60b"} Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.446260 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed","Type":"ContainerStarted","Data":"b8546f25947aba0a7c416bf06c46990ec8a05e9870ff08a6b5c67b35b7107707"} Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.451030 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" event={"ID":"a86208a8-d898-447f-ba80-f6b72f601ef0","Type":"ContainerStarted","Data":"90d027bce3e451b330eca8f7d1d2351b902d31ab8ff3729cd87c53e9b4fb0313"} Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.451071 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" event={"ID":"a86208a8-d898-447f-ba80-f6b72f601ef0","Type":"ContainerStarted","Data":"0c72032f4e55a5709092be874dfc48be5b5972c1270568275832b4361c786228"} Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.451756 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.455521 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" event={"ID":"913aa7e7-bbd3-4ba3-8e13-fb2fd74261da","Type":"ContainerStarted","Data":"31485e88a2b89447d3d7905d650e5dc469a6fdf6858635508fd616ec58e24d9a"} Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.455552 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" event={"ID":"913aa7e7-bbd3-4ba3-8e13-fb2fd74261da","Type":"ContainerStarted","Data":"70cefeaf768f38e007ce1ceb37be9c5039b37d24cfca8f087c9e18ef8a9a4272"} Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.456046 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.460223 4981 generic.go:334] "Generic (PLEG): container finished" podID="b0d12f02-fe5f-4ca7-a190-852ad6284190" containerID="c5f6787846da151f909ec9bf2d0b15f05fdd8c05b853ff4fb5d8d0d5c909e24c" exitCode=0 Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.462721 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6w9qb" event={"ID":"b0d12f02-fe5f-4ca7-a190-852ad6284190","Type":"ContainerDied","Data":"c5f6787846da151f909ec9bf2d0b15f05fdd8c05b853ff4fb5d8d0d5c909e24c"} Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.462750 4981 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6w9qb" event={"ID":"b0d12f02-fe5f-4ca7-a190-852ad6284190","Type":"ContainerStarted","Data":"f9ec8ac2b1c564b12a5e13c0ec4fa4657b9d1a30d7e7bd32e871e1e5fbf0d5b0"} Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.507736 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" podStartSLOduration=1.507718734 podStartE2EDuration="1.507718734s" podCreationTimestamp="2026-02-27 18:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:48:03.482201281 +0000 UTC m=+182.960982441" watchObservedRunningTime="2026-02-27 18:48:03.507718734 +0000 UTC m=+182.986499894" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.515241 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" podStartSLOduration=119.514573292 podStartE2EDuration="1m59.514573292s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:48:03.501291539 +0000 UTC m=+182.980072699" watchObservedRunningTime="2026-02-27 18:48:03.514573292 +0000 UTC m=+182.993354452" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.525252 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c-catalog-content\") pod \"redhat-operators-zrwz4\" (UID: \"fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c\") " pod="openshift-marketplace/redhat-operators-zrwz4" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.525311 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-dvvdw\" (UniqueName: \"kubernetes.io/projected/fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c-kube-api-access-dvvdw\") pod \"redhat-operators-zrwz4\" (UID: \"fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c\") " pod="openshift-marketplace/redhat-operators-zrwz4" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.525364 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c-utilities\") pod \"redhat-operators-zrwz4\" (UID: \"fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c\") " pod="openshift-marketplace/redhat-operators-zrwz4" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.572526 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.619980 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rmtpf"] Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.627556 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c-utilities\") pod \"redhat-operators-zrwz4\" (UID: \"fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c\") " pod="openshift-marketplace/redhat-operators-zrwz4" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.627693 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c-catalog-content\") pod \"redhat-operators-zrwz4\" (UID: \"fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c\") " pod="openshift-marketplace/redhat-operators-zrwz4" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.627754 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvvdw\" 
(UniqueName: \"kubernetes.io/projected/fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c-kube-api-access-dvvdw\") pod \"redhat-operators-zrwz4\" (UID: \"fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c\") " pod="openshift-marketplace/redhat-operators-zrwz4" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.630018 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c-utilities\") pod \"redhat-operators-zrwz4\" (UID: \"fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c\") " pod="openshift-marketplace/redhat-operators-zrwz4" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.630227 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c-catalog-content\") pod \"redhat-operators-zrwz4\" (UID: \"fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c\") " pod="openshift-marketplace/redhat-operators-zrwz4" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.644410 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10e31a3f-eb88-4c8b-93e7-e251f762d29e" path="/var/lib/kubelet/pods/10e31a3f-eb88-4c8b-93e7-e251f762d29e/volumes" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.644902 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89fbbf29-4a7d-40d3-ad12-9e1111396e8d" path="/var/lib/kubelet/pods/89fbbf29-4a7d-40d3-ad12-9e1111396e8d/volumes" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.645657 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.653877 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvvdw\" (UniqueName: \"kubernetes.io/projected/fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c-kube-api-access-dvvdw\") 
pod \"redhat-operators-zrwz4\" (UID: \"fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c\") " pod="openshift-marketplace/redhat-operators-zrwz4" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.677832 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 27 18:48:03 crc kubenswrapper[4981]: W0227 18:48:03.692339 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod614b8fd4_9d4f_45c1_a083_04597535ab5a.slice/crio-fb2b70c21636573e78528ab2d05fc64964dcdd470266dac65896c0d0148937ff WatchSource:0}: Error finding container fb2b70c21636573e78528ab2d05fc64964dcdd470266dac65896c0d0148937ff: Status 404 returned error can't find the container with id fb2b70c21636573e78528ab2d05fc64964dcdd470266dac65896c0d0148937ff Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.708503 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zrwz4" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.817941 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-54f7668998-kqf4l"] Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.818809 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.822315 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.827178 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.827259 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.827486 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.827665 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.827842 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.830672 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.838071 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54f7668998-kqf4l"] Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.932669 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-serving-cert\") pod \"controller-manager-54f7668998-kqf4l\" (UID: \"71f355f3-bcb9-4dec-9ab9-f3d1119d7308\") " 
pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.932965 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kknvp\" (UniqueName: \"kubernetes.io/projected/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-kube-api-access-kknvp\") pod \"controller-manager-54f7668998-kqf4l\" (UID: \"71f355f3-bcb9-4dec-9ab9-f3d1119d7308\") " pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.933009 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-client-ca\") pod \"controller-manager-54f7668998-kqf4l\" (UID: \"71f355f3-bcb9-4dec-9ab9-f3d1119d7308\") " pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.933039 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-proxy-ca-bundles\") pod \"controller-manager-54f7668998-kqf4l\" (UID: \"71f355f3-bcb9-4dec-9ab9-f3d1119d7308\") " pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.933131 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-config\") pod \"controller-manager-54f7668998-kqf4l\" (UID: \"71f355f3-bcb9-4dec-9ab9-f3d1119d7308\") " pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.935259 4981 patch_prober.go:28] interesting pod/router-default-5444994796-r858c container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 18:48:03 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld Feb 27 18:48:03 crc kubenswrapper[4981]: [+]process-running ok Feb 27 18:48:03 crc kubenswrapper[4981]: healthz check failed Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.935326 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-r858c" podUID="2d1ccfd2-99ab-4caf-82d3-6b58656de39f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 18:48:03 crc kubenswrapper[4981]: I0227 18:48:03.962822 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zrwz4"] Feb 27 18:48:04 crc kubenswrapper[4981]: W0227 18:48:04.013772 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb57a0d2_8b56_43c1_adbd_6d4d3bd17c3c.slice/crio-b0544498d79a13f1021a9f8cfe7c6d39d722ff0ad05da6df5a3496895d05f951 WatchSource:0}: Error finding container b0544498d79a13f1021a9f8cfe7c6d39d722ff0ad05da6df5a3496895d05f951: Status 404 returned error can't find the container with id b0544498d79a13f1021a9f8cfe7c6d39d722ff0ad05da6df5a3496895d05f951 Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.034634 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-proxy-ca-bundles\") pod \"controller-manager-54f7668998-kqf4l\" (UID: \"71f355f3-bcb9-4dec-9ab9-f3d1119d7308\") " pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.034701 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-config\") pod \"controller-manager-54f7668998-kqf4l\" (UID: \"71f355f3-bcb9-4dec-9ab9-f3d1119d7308\") " pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.034760 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-serving-cert\") pod \"controller-manager-54f7668998-kqf4l\" (UID: \"71f355f3-bcb9-4dec-9ab9-f3d1119d7308\") " pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.034780 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kknvp\" (UniqueName: \"kubernetes.io/projected/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-kube-api-access-kknvp\") pod \"controller-manager-54f7668998-kqf4l\" (UID: \"71f355f3-bcb9-4dec-9ab9-f3d1119d7308\") " pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.034817 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-client-ca\") pod \"controller-manager-54f7668998-kqf4l\" (UID: \"71f355f3-bcb9-4dec-9ab9-f3d1119d7308\") " pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.035774 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-client-ca\") pod \"controller-manager-54f7668998-kqf4l\" (UID: \"71f355f3-bcb9-4dec-9ab9-f3d1119d7308\") " pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.035847 4981 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-proxy-ca-bundles\") pod \"controller-manager-54f7668998-kqf4l\" (UID: \"71f355f3-bcb9-4dec-9ab9-f3d1119d7308\") " pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.036078 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-config\") pod \"controller-manager-54f7668998-kqf4l\" (UID: \"71f355f3-bcb9-4dec-9ab9-f3d1119d7308\") " pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.043368 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-serving-cert\") pod \"controller-manager-54f7668998-kqf4l\" (UID: \"71f355f3-bcb9-4dec-9ab9-f3d1119d7308\") " pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.051526 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kknvp\" (UniqueName: \"kubernetes.io/projected/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-kube-api-access-kknvp\") pod \"controller-manager-54f7668998-kqf4l\" (UID: \"71f355f3-bcb9-4dec-9ab9-f3d1119d7308\") " pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.139228 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.430042 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54f7668998-kqf4l"] Feb 27 18:48:04 crc kubenswrapper[4981]: W0227 18:48:04.442517 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71f355f3_bcb9_4dec_9ab9_f3d1119d7308.slice/crio-23f765834db78ee3be33d4f6c419d69f413c8ee0342858605875bf5c547e0160 WatchSource:0}: Error finding container 23f765834db78ee3be33d4f6c419d69f413c8ee0342858605875bf5c547e0160: Status 404 returned error can't find the container with id 23f765834db78ee3be33d4f6c419d69f413c8ee0342858605875bf5c547e0160 Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.475017 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"614b8fd4-9d4f-45c1-a083-04597535ab5a","Type":"ContainerStarted","Data":"06e920b931753c91b4514364621dc06f2f764575831d31fd9e0b98983a67e3e5"} Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.475095 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"614b8fd4-9d4f-45c1-a083-04597535ab5a","Type":"ContainerStarted","Data":"fb2b70c21636573e78528ab2d05fc64964dcdd470266dac65896c0d0148937ff"} Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.477937 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" event={"ID":"71f355f3-bcb9-4dec-9ab9-f3d1119d7308","Type":"ContainerStarted","Data":"23f765834db78ee3be33d4f6c419d69f413c8ee0342858605875bf5c547e0160"} Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.480438 4981 generic.go:334] "Generic (PLEG): container finished" podID="fbc8a428-3dab-402e-a105-0576aa196dcc" 
containerID="83ad14d43913d2f995a1fd2af409f2e54b89ff582688cd775cab018c11970786" exitCode=0 Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.480495 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmtpf" event={"ID":"fbc8a428-3dab-402e-a105-0576aa196dcc","Type":"ContainerDied","Data":"83ad14d43913d2f995a1fd2af409f2e54b89ff582688cd775cab018c11970786"} Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.480512 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmtpf" event={"ID":"fbc8a428-3dab-402e-a105-0576aa196dcc","Type":"ContainerStarted","Data":"77229a220fc51e35c71263d5d6eebc59af27f0abed475a3fa8bacb7a6f03f1c1"} Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.487655 4981 generic.go:334] "Generic (PLEG): container finished" podID="fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c" containerID="acc9ac95d1ffba7892b2b8dddcedf82c83e3de0ef658d8c5a4e4c6ccdbfca52d" exitCode=0 Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.488109 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.488044932 podStartE2EDuration="2.488044932s" podCreationTimestamp="2026-02-27 18:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:48:04.485200995 +0000 UTC m=+183.963982155" watchObservedRunningTime="2026-02-27 18:48:04.488044932 +0000 UTC m=+183.966826092" Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.488746 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrwz4" event={"ID":"fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c","Type":"ContainerDied","Data":"acc9ac95d1ffba7892b2b8dddcedf82c83e3de0ef658d8c5a4e4c6ccdbfca52d"} Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.488770 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-zrwz4" event={"ID":"fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c","Type":"ContainerStarted","Data":"b0544498d79a13f1021a9f8cfe7c6d39d722ff0ad05da6df5a3496895d05f951"} Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.820872 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.825533 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.825571 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.829869 4981 patch_prober.go:28] interesting pod/console-f9d7485db-dllzn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.829908 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-dllzn" podUID="124fe9f8-0789-4ae2-aa50-6eb0c57f60ea" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.833671 4981 patch_prober.go:28] interesting pod/downloads-7954f5f757-t9grl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.833754 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t9grl" 
podUID="32d9179a-38c6-482f-95be-c94b48b83856" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.833866 4981 patch_prober.go:28] interesting pod/downloads-7954f5f757-t9grl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.833902 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t9grl" podUID="32d9179a-38c6-482f-95be-c94b48b83856" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.909333 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.920943 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-g8dqb" Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.933988 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-r858c" Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.936400 4981 patch_prober.go:28] interesting pod/router-default-5444994796-r858c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 18:48:04 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld Feb 27 18:48:04 crc kubenswrapper[4981]: [+]process-running ok Feb 27 18:48:04 crc kubenswrapper[4981]: healthz check failed Feb 27 18:48:04 crc 
kubenswrapper[4981]: I0227 18:48:04.936463 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-r858c" podUID="2d1ccfd2-99ab-4caf-82d3-6b58656de39f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.971529 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed-kube-api-access\") pod \"3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed\" (UID: \"3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed\") " Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.971598 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed-kubelet-dir\") pod \"3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed\" (UID: \"3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed\") " Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.973548 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed" (UID: "3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:48:04 crc kubenswrapper[4981]: I0227 18:48:04.985229 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed" (UID: "3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:48:05 crc kubenswrapper[4981]: I0227 18:48:05.073697 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:05 crc kubenswrapper[4981]: I0227 18:48:05.073729 4981 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:05 crc kubenswrapper[4981]: I0227 18:48:05.119666 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-7mfkz" Feb 27 18:48:05 crc kubenswrapper[4981]: I0227 18:48:05.507321 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed","Type":"ContainerDied","Data":"b8546f25947aba0a7c416bf06c46990ec8a05e9870ff08a6b5c67b35b7107707"} Feb 27 18:48:05 crc kubenswrapper[4981]: I0227 18:48:05.507357 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8546f25947aba0a7c416bf06c46990ec8a05e9870ff08a6b5c67b35b7107707" Feb 27 18:48:05 crc kubenswrapper[4981]: I0227 18:48:05.507408 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 18:48:05 crc kubenswrapper[4981]: I0227 18:48:05.516971 4981 generic.go:334] "Generic (PLEG): container finished" podID="614b8fd4-9d4f-45c1-a083-04597535ab5a" containerID="06e920b931753c91b4514364621dc06f2f764575831d31fd9e0b98983a67e3e5" exitCode=0 Feb 27 18:48:05 crc kubenswrapper[4981]: I0227 18:48:05.517177 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"614b8fd4-9d4f-45c1-a083-04597535ab5a","Type":"ContainerDied","Data":"06e920b931753c91b4514364621dc06f2f764575831d31fd9e0b98983a67e3e5"} Feb 27 18:48:05 crc kubenswrapper[4981]: I0227 18:48:05.521120 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" event={"ID":"71f355f3-bcb9-4dec-9ab9-f3d1119d7308","Type":"ContainerStarted","Data":"a0327fbb2b3c1f2c476a622ef3eb7816b602de90562c626cea606574e26f86c3"} Feb 27 18:48:05 crc kubenswrapper[4981]: I0227 18:48:05.521152 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" Feb 27 18:48:05 crc kubenswrapper[4981]: I0227 18:48:05.525565 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" Feb 27 18:48:05 crc kubenswrapper[4981]: I0227 18:48:05.529659 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-pcmgp" Feb 27 18:48:05 crc kubenswrapper[4981]: I0227 18:48:05.578909 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" podStartSLOduration=3.578891218 podStartE2EDuration="3.578891218s" podCreationTimestamp="2026-02-27 18:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:48:05.576709732 +0000 UTC m=+185.055490892" watchObservedRunningTime="2026-02-27 18:48:05.578891218 +0000 UTC m=+185.057672378" Feb 27 18:48:05 crc kubenswrapper[4981]: I0227 18:48:05.935562 4981 patch_prober.go:28] interesting pod/router-default-5444994796-r858c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 18:48:05 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld Feb 27 18:48:05 crc kubenswrapper[4981]: [+]process-running ok Feb 27 18:48:05 crc kubenswrapper[4981]: healthz check failed Feb 27 18:48:05 crc kubenswrapper[4981]: I0227 18:48:05.935620 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-r858c" podUID="2d1ccfd2-99ab-4caf-82d3-6b58656de39f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 18:48:06 crc kubenswrapper[4981]: I0227 18:48:06.921176 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 18:48:06 crc kubenswrapper[4981]: I0227 18:48:06.938289 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-r858c" Feb 27 18:48:06 crc kubenswrapper[4981]: I0227 18:48:06.940893 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-r858c" Feb 27 18:48:07 crc kubenswrapper[4981]: I0227 18:48:07.001283 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/614b8fd4-9d4f-45c1-a083-04597535ab5a-kube-api-access\") pod \"614b8fd4-9d4f-45c1-a083-04597535ab5a\" (UID: \"614b8fd4-9d4f-45c1-a083-04597535ab5a\") " Feb 27 18:48:07 crc kubenswrapper[4981]: I0227 18:48:07.001386 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/614b8fd4-9d4f-45c1-a083-04597535ab5a-kubelet-dir\") pod \"614b8fd4-9d4f-45c1-a083-04597535ab5a\" (UID: \"614b8fd4-9d4f-45c1-a083-04597535ab5a\") " Feb 27 18:48:07 crc kubenswrapper[4981]: I0227 18:48:07.001673 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/614b8fd4-9d4f-45c1-a083-04597535ab5a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "614b8fd4-9d4f-45c1-a083-04597535ab5a" (UID: "614b8fd4-9d4f-45c1-a083-04597535ab5a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:48:07 crc kubenswrapper[4981]: I0227 18:48:07.006600 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/614b8fd4-9d4f-45c1-a083-04597535ab5a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "614b8fd4-9d4f-45c1-a083-04597535ab5a" (UID: "614b8fd4-9d4f-45c1-a083-04597535ab5a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:48:07 crc kubenswrapper[4981]: I0227 18:48:07.103518 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/614b8fd4-9d4f-45c1-a083-04597535ab5a-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:07 crc kubenswrapper[4981]: I0227 18:48:07.103545 4981 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/614b8fd4-9d4f-45c1-a083-04597535ab5a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:07 crc kubenswrapper[4981]: I0227 18:48:07.307102 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f11688f5-7d6e-4931-88e5-31a5183eb6f3-metrics-certs\") pod \"network-metrics-daemon-n2dzw\" (UID: \"f11688f5-7d6e-4931-88e5-31a5183eb6f3\") " pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:48:07 crc kubenswrapper[4981]: I0227 18:48:07.309865 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 27 18:48:07 crc kubenswrapper[4981]: I0227 18:48:07.321936 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f11688f5-7d6e-4931-88e5-31a5183eb6f3-metrics-certs\") pod \"network-metrics-daemon-n2dzw\" (UID: \"f11688f5-7d6e-4931-88e5-31a5183eb6f3\") " pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:48:07 crc kubenswrapper[4981]: I0227 18:48:07.411575 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 27 18:48:07 crc kubenswrapper[4981]: I0227 18:48:07.420246 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n2dzw" Feb 27 18:48:07 crc kubenswrapper[4981]: I0227 18:48:07.563430 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 18:48:07 crc kubenswrapper[4981]: I0227 18:48:07.563730 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"614b8fd4-9d4f-45c1-a083-04597535ab5a","Type":"ContainerDied","Data":"fb2b70c21636573e78528ab2d05fc64964dcdd470266dac65896c0d0148937ff"} Feb 27 18:48:07 crc kubenswrapper[4981]: I0227 18:48:07.563772 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb2b70c21636573e78528ab2d05fc64964dcdd470266dac65896c0d0148937ff" Feb 27 18:48:07 crc kubenswrapper[4981]: I0227 18:48:07.643157 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-gv2d7" Feb 27 18:48:07 crc kubenswrapper[4981]: I0227 18:48:07.820651 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-n2dzw"] Feb 27 18:48:07 crc kubenswrapper[4981]: I0227 18:48:07.940378 4981 ???:1] "http: TLS handshake error from 192.168.126.11:57408: no serving certificate available for the kubelet" Feb 27 18:48:08 crc kubenswrapper[4981]: I0227 18:48:08.591125 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n2dzw" event={"ID":"f11688f5-7d6e-4931-88e5-31a5183eb6f3","Type":"ContainerStarted","Data":"ee14d9c14fb5345bfd883fb719522645990e60888b2641db481860265cf9ed71"} Feb 27 18:48:09 crc kubenswrapper[4981]: I0227 18:48:09.173144 4981 ???:1] "http: TLS handshake error from 192.168.126.11:51720: no serving certificate available for the kubelet" Feb 27 18:48:09 crc kubenswrapper[4981]: I0227 18:48:09.603637 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-n2dzw" event={"ID":"f11688f5-7d6e-4931-88e5-31a5183eb6f3","Type":"ContainerStarted","Data":"ce7871140ef7cda38035edc7494764795dd1107bf17919fa8ad85415ae49b20a"} Feb 27 18:48:09 crc kubenswrapper[4981]: I0227 18:48:09.611403 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-j7hs2_450816f5-eb2f-44e6-9b62-fd3f3b2fbf48/cluster-samples-operator/0.log" Feb 27 18:48:09 crc kubenswrapper[4981]: I0227 18:48:09.611455 4981 generic.go:334] "Generic (PLEG): container finished" podID="450816f5-eb2f-44e6-9b62-fd3f3b2fbf48" containerID="f5d2d2642ecc564bf9db50440d7db4d7f5c5e955c6e0ea57f53239d241d4f462" exitCode=2 Feb 27 18:48:09 crc kubenswrapper[4981]: I0227 18:48:09.611485 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7hs2" event={"ID":"450816f5-eb2f-44e6-9b62-fd3f3b2fbf48","Type":"ContainerDied","Data":"f5d2d2642ecc564bf9db50440d7db4d7f5c5e955c6e0ea57f53239d241d4f462"} Feb 27 18:48:09 crc kubenswrapper[4981]: I0227 18:48:09.612065 4981 scope.go:117] "RemoveContainer" containerID="f5d2d2642ecc564bf9db50440d7db4d7f5c5e955c6e0ea57f53239d241d4f462" Feb 27 18:48:14 crc kubenswrapper[4981]: I0227 18:48:14.831689 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:48:14 crc kubenswrapper[4981]: I0227 18:48:14.837732 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-dllzn" Feb 27 18:48:14 crc kubenswrapper[4981]: I0227 18:48:14.845321 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-t9grl" Feb 27 18:48:18 crc kubenswrapper[4981]: I0227 18:48:18.212429 4981 ???:1] "http: TLS handshake error from 192.168.126.11:51734: no serving certificate available for the kubelet" 
Feb 27 18:48:20 crc kubenswrapper[4981]: I0227 18:48:20.721453 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54f7668998-kqf4l"] Feb 27 18:48:20 crc kubenswrapper[4981]: I0227 18:48:20.722379 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" podUID="71f355f3-bcb9-4dec-9ab9-f3d1119d7308" containerName="controller-manager" containerID="cri-o://a0327fbb2b3c1f2c476a622ef3eb7816b602de90562c626cea606574e26f86c3" gracePeriod=30 Feb 27 18:48:20 crc kubenswrapper[4981]: I0227 18:48:20.740748 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98"] Feb 27 18:48:20 crc kubenswrapper[4981]: I0227 18:48:20.741010 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" podUID="913aa7e7-bbd3-4ba3-8e13-fb2fd74261da" containerName="route-controller-manager" containerID="cri-o://31485e88a2b89447d3d7905d650e5dc469a6fdf6858635508fd616ec58e24d9a" gracePeriod=30 Feb 27 18:48:21 crc kubenswrapper[4981]: I0227 18:48:21.684026 4981 generic.go:334] "Generic (PLEG): container finished" podID="913aa7e7-bbd3-4ba3-8e13-fb2fd74261da" containerID="31485e88a2b89447d3d7905d650e5dc469a6fdf6858635508fd616ec58e24d9a" exitCode=0 Feb 27 18:48:21 crc kubenswrapper[4981]: I0227 18:48:21.684114 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" event={"ID":"913aa7e7-bbd3-4ba3-8e13-fb2fd74261da","Type":"ContainerDied","Data":"31485e88a2b89447d3d7905d650e5dc469a6fdf6858635508fd616ec58e24d9a"} Feb 27 18:48:21 crc kubenswrapper[4981]: I0227 18:48:21.687438 4981 generic.go:334] "Generic (PLEG): container finished" podID="71f355f3-bcb9-4dec-9ab9-f3d1119d7308" 
containerID="a0327fbb2b3c1f2c476a622ef3eb7816b602de90562c626cea606574e26f86c3" exitCode=0 Feb 27 18:48:21 crc kubenswrapper[4981]: I0227 18:48:21.687469 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" event={"ID":"71f355f3-bcb9-4dec-9ab9-f3d1119d7308","Type":"ContainerDied","Data":"a0327fbb2b3c1f2c476a622ef3eb7816b602de90562c626cea606574e26f86c3"} Feb 27 18:48:22 crc kubenswrapper[4981]: I0227 18:48:22.100296 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:48:22 crc kubenswrapper[4981]: I0227 18:48:22.429171 4981 patch_prober.go:28] interesting pod/route-controller-manager-79c944dc4f-86j98 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.52:8443/healthz\": dial tcp 10.217.0.52:8443: connect: connection refused" start-of-body= Feb 27 18:48:22 crc kubenswrapper[4981]: I0227 18:48:22.429269 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" podUID="913aa7e7-bbd3-4ba3-8e13-fb2fd74261da" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.52:8443/healthz\": dial tcp 10.217.0.52:8443: connect: connection refused" Feb 27 18:48:24 crc kubenswrapper[4981]: I0227 18:48:24.140187 4981 patch_prober.go:28] interesting pod/controller-manager-54f7668998-kqf4l container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" start-of-body= Feb 27 18:48:24 crc kubenswrapper[4981]: I0227 18:48:24.140271 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" 
podUID="71f355f3-bcb9-4dec-9ab9-f3d1119d7308" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" Feb 27 18:48:33 crc kubenswrapper[4981]: I0227 18:48:33.424820 4981 patch_prober.go:28] interesting pod/route-controller-manager-79c944dc4f-86j98 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.52:8443/healthz\": dial tcp 10.217.0.52:8443: i/o timeout" start-of-body= Feb 27 18:48:33 crc kubenswrapper[4981]: I0227 18:48:33.425450 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" podUID="913aa7e7-bbd3-4ba3-8e13-fb2fd74261da" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.52:8443/healthz\": dial tcp 10.217.0.52:8443: i/o timeout" Feb 27 18:48:35 crc kubenswrapper[4981]: I0227 18:48:35.140208 4981 patch_prober.go:28] interesting pod/controller-manager-54f7668998-kqf4l container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 18:48:35 crc kubenswrapper[4981]: I0227 18:48:35.140543 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" podUID="71f355f3-bcb9-4dec-9ab9-f3d1119d7308" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 18:48:35 crc kubenswrapper[4981]: I0227 18:48:35.636861 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fqktd" Feb 27 18:48:36 crc kubenswrapper[4981]: I0227 18:48:36.582786 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:48:36 crc kubenswrapper[4981]: I0227 18:48:36.584574 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 27 18:48:36 crc kubenswrapper[4981]: I0227 18:48:36.599303 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:48:36 crc kubenswrapper[4981]: I0227 18:48:36.684288 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:48:36 crc kubenswrapper[4981]: I0227 18:48:36.684378 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:48:36 crc kubenswrapper[4981]: I0227 18:48:36.684403 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:48:36 crc kubenswrapper[4981]: I0227 18:48:36.686425 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 27 18:48:36 crc kubenswrapper[4981]: I0227 18:48:36.687314 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 27 18:48:36 crc kubenswrapper[4981]: I0227 18:48:36.696638 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 27 18:48:36 crc kubenswrapper[4981]: I0227 18:48:36.710168 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:48:36 crc kubenswrapper[4981]: I0227 18:48:36.711069 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:48:36 crc kubenswrapper[4981]: I0227 18:48:36.719882 4981 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:48:36 crc kubenswrapper[4981]: I0227 18:48:36.795263 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 18:48:36 crc kubenswrapper[4981]: I0227 18:48:36.802144 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 18:48:36 crc kubenswrapper[4981]: I0227 18:48:36.815749 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:48:36 crc kubenswrapper[4981]: E0227 18:48:36.978638 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:48:36 crc kubenswrapper[4981]: E0227 18:48:36.979151 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:48:36 crc kubenswrapper[4981]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:48:36 crc kubenswrapper[4981]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nw79t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536968-jn8tc_openshift-infra(8bff5a34-e6d7-482d-bed3-dfe5269b225a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Feb 27 18:48:36 crc kubenswrapper[4981]: > logger="UnhandledError" Feb 27 18:48:36 crc kubenswrapper[4981]: E0227 18:48:36.981136 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29536968-jn8tc" podUID="8bff5a34-e6d7-482d-bed3-dfe5269b225a" Feb 27 18:48:36 crc kubenswrapper[4981]: I0227 18:48:36.990910 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 27 18:48:36 crc kubenswrapper[4981]: E0227 18:48:36.991386 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="614b8fd4-9d4f-45c1-a083-04597535ab5a" containerName="pruner" Feb 27 18:48:36 crc kubenswrapper[4981]: I0227 18:48:36.991444 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="614b8fd4-9d4f-45c1-a083-04597535ab5a" containerName="pruner" Feb 27 18:48:36 crc kubenswrapper[4981]: E0227 
18:48:36.991468 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed" containerName="pruner" Feb 27 18:48:36 crc kubenswrapper[4981]: I0227 18:48:36.991484 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed" containerName="pruner" Feb 27 18:48:36 crc kubenswrapper[4981]: I0227 18:48:36.991668 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fc4c7c5-9e9b-4eaa-a5d0-3ba8531e9eed" containerName="pruner" Feb 27 18:48:36 crc kubenswrapper[4981]: I0227 18:48:36.991693 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="614b8fd4-9d4f-45c1-a083-04597535ab5a" containerName="pruner" Feb 27 18:48:36 crc kubenswrapper[4981]: I0227 18:48:36.993294 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 18:48:36 crc kubenswrapper[4981]: I0227 18:48:36.999631 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 27 18:48:37 crc kubenswrapper[4981]: I0227 18:48:37.000313 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 27 18:48:37 crc kubenswrapper[4981]: I0227 18:48:37.003181 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 27 18:48:37 crc kubenswrapper[4981]: I0227 18:48:37.090045 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4319326-d440-4e65-bb82-6da5c3031f30-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c4319326-d440-4e65-bb82-6da5c3031f30\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 18:48:37 crc kubenswrapper[4981]: I0227 18:48:37.090145 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4319326-d440-4e65-bb82-6da5c3031f30-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c4319326-d440-4e65-bb82-6da5c3031f30\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 18:48:37 crc kubenswrapper[4981]: I0227 18:48:37.191980 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4319326-d440-4e65-bb82-6da5c3031f30-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c4319326-d440-4e65-bb82-6da5c3031f30\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 18:48:37 crc kubenswrapper[4981]: I0227 18:48:37.192032 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4319326-d440-4e65-bb82-6da5c3031f30-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c4319326-d440-4e65-bb82-6da5c3031f30\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 18:48:37 crc kubenswrapper[4981]: I0227 18:48:37.192130 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4319326-d440-4e65-bb82-6da5c3031f30-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c4319326-d440-4e65-bb82-6da5c3031f30\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 18:48:37 crc kubenswrapper[4981]: I0227 18:48:37.211841 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4319326-d440-4e65-bb82-6da5c3031f30-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c4319326-d440-4e65-bb82-6da5c3031f30\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 18:48:37 crc kubenswrapper[4981]: I0227 18:48:37.328646 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 18:48:37 crc kubenswrapper[4981]: E0227 18:48:37.835108 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536968-jn8tc" podUID="8bff5a34-e6d7-482d-bed3-dfe5269b225a" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.615012 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.623341 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.651861 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-68b98f98c4-v4rq4"] Feb 27 18:48:38 crc kubenswrapper[4981]: E0227 18:48:38.652190 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="913aa7e7-bbd3-4ba3-8e13-fb2fd74261da" containerName="route-controller-manager" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.652213 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="913aa7e7-bbd3-4ba3-8e13-fb2fd74261da" containerName="route-controller-manager" Feb 27 18:48:38 crc kubenswrapper[4981]: E0227 18:48:38.652229 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71f355f3-bcb9-4dec-9ab9-f3d1119d7308" containerName="controller-manager" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.652239 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="71f355f3-bcb9-4dec-9ab9-f3d1119d7308" containerName="controller-manager" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.652411 4981 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="71f355f3-bcb9-4dec-9ab9-f3d1119d7308" containerName="controller-manager" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.652435 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="913aa7e7-bbd3-4ba3-8e13-fb2fd74261da" containerName="route-controller-manager" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.652936 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.679721 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68b98f98c4-v4rq4"] Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.721800 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/913aa7e7-bbd3-4ba3-8e13-fb2fd74261da-client-ca\") pod \"913aa7e7-bbd3-4ba3-8e13-fb2fd74261da\" (UID: \"913aa7e7-bbd3-4ba3-8e13-fb2fd74261da\") " Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.721865 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-proxy-ca-bundles\") pod \"71f355f3-bcb9-4dec-9ab9-f3d1119d7308\" (UID: \"71f355f3-bcb9-4dec-9ab9-f3d1119d7308\") " Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.721890 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/913aa7e7-bbd3-4ba3-8e13-fb2fd74261da-serving-cert\") pod \"913aa7e7-bbd3-4ba3-8e13-fb2fd74261da\" (UID: \"913aa7e7-bbd3-4ba3-8e13-fb2fd74261da\") " Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.721944 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/913aa7e7-bbd3-4ba3-8e13-fb2fd74261da-config\") pod \"913aa7e7-bbd3-4ba3-8e13-fb2fd74261da\" (UID: \"913aa7e7-bbd3-4ba3-8e13-fb2fd74261da\") " Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.721975 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw5ss\" (UniqueName: \"kubernetes.io/projected/913aa7e7-bbd3-4ba3-8e13-fb2fd74261da-kube-api-access-hw5ss\") pod \"913aa7e7-bbd3-4ba3-8e13-fb2fd74261da\" (UID: \"913aa7e7-bbd3-4ba3-8e13-fb2fd74261da\") " Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.722018 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-client-ca\") pod \"71f355f3-bcb9-4dec-9ab9-f3d1119d7308\" (UID: \"71f355f3-bcb9-4dec-9ab9-f3d1119d7308\") " Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.722155 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-config\") pod \"71f355f3-bcb9-4dec-9ab9-f3d1119d7308\" (UID: \"71f355f3-bcb9-4dec-9ab9-f3d1119d7308\") " Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.722197 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kknvp\" (UniqueName: \"kubernetes.io/projected/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-kube-api-access-kknvp\") pod \"71f355f3-bcb9-4dec-9ab9-f3d1119d7308\" (UID: \"71f355f3-bcb9-4dec-9ab9-f3d1119d7308\") " Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.722278 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-serving-cert\") pod \"71f355f3-bcb9-4dec-9ab9-f3d1119d7308\" (UID: \"71f355f3-bcb9-4dec-9ab9-f3d1119d7308\") " Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.723234 
4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/913aa7e7-bbd3-4ba3-8e13-fb2fd74261da-client-ca" (OuterVolumeSpecName: "client-ca") pod "913aa7e7-bbd3-4ba3-8e13-fb2fd74261da" (UID: "913aa7e7-bbd3-4ba3-8e13-fb2fd74261da"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.723444 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/913aa7e7-bbd3-4ba3-8e13-fb2fd74261da-config" (OuterVolumeSpecName: "config") pod "913aa7e7-bbd3-4ba3-8e13-fb2fd74261da" (UID: "913aa7e7-bbd3-4ba3-8e13-fb2fd74261da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.723975 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-config" (OuterVolumeSpecName: "config") pod "71f355f3-bcb9-4dec-9ab9-f3d1119d7308" (UID: "71f355f3-bcb9-4dec-9ab9-f3d1119d7308"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.724041 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "71f355f3-bcb9-4dec-9ab9-f3d1119d7308" (UID: "71f355f3-bcb9-4dec-9ab9-f3d1119d7308"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.724382 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-client-ca" (OuterVolumeSpecName: "client-ca") pod "71f355f3-bcb9-4dec-9ab9-f3d1119d7308" (UID: "71f355f3-bcb9-4dec-9ab9-f3d1119d7308"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.726250 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/913aa7e7-bbd3-4ba3-8e13-fb2fd74261da-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "913aa7e7-bbd3-4ba3-8e13-fb2fd74261da" (UID: "913aa7e7-bbd3-4ba3-8e13-fb2fd74261da"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.728999 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/913aa7e7-bbd3-4ba3-8e13-fb2fd74261da-kube-api-access-hw5ss" (OuterVolumeSpecName: "kube-api-access-hw5ss") pod "913aa7e7-bbd3-4ba3-8e13-fb2fd74261da" (UID: "913aa7e7-bbd3-4ba3-8e13-fb2fd74261da"). InnerVolumeSpecName "kube-api-access-hw5ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.729030 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "71f355f3-bcb9-4dec-9ab9-f3d1119d7308" (UID: "71f355f3-bcb9-4dec-9ab9-f3d1119d7308"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.745149 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-kube-api-access-kknvp" (OuterVolumeSpecName: "kube-api-access-kknvp") pod "71f355f3-bcb9-4dec-9ab9-f3d1119d7308" (UID: "71f355f3-bcb9-4dec-9ab9-f3d1119d7308"). InnerVolumeSpecName "kube-api-access-kknvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.825443 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a66a26e-c29b-446c-b15a-6e5f94c63635-proxy-ca-bundles\") pod \"controller-manager-68b98f98c4-v4rq4\" (UID: \"4a66a26e-c29b-446c-b15a-6e5f94c63635\") " pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.825580 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a66a26e-c29b-446c-b15a-6e5f94c63635-config\") pod \"controller-manager-68b98f98c4-v4rq4\" (UID: \"4a66a26e-c29b-446c-b15a-6e5f94c63635\") " pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.825721 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdmld\" (UniqueName: \"kubernetes.io/projected/4a66a26e-c29b-446c-b15a-6e5f94c63635-kube-api-access-qdmld\") pod \"controller-manager-68b98f98c4-v4rq4\" (UID: \"4a66a26e-c29b-446c-b15a-6e5f94c63635\") " pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.825881 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a66a26e-c29b-446c-b15a-6e5f94c63635-client-ca\") pod \"controller-manager-68b98f98c4-v4rq4\" (UID: \"4a66a26e-c29b-446c-b15a-6e5f94c63635\") " pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.825937 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/4a66a26e-c29b-446c-b15a-6e5f94c63635-serving-cert\") pod \"controller-manager-68b98f98c4-v4rq4\" (UID: \"4a66a26e-c29b-446c-b15a-6e5f94c63635\") " pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.826046 4981 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.826090 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.826105 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kknvp\" (UniqueName: \"kubernetes.io/projected/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-kube-api-access-kknvp\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.826119 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.826131 4981 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/913aa7e7-bbd3-4ba3-8e13-fb2fd74261da-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.826144 4981 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/71f355f3-bcb9-4dec-9ab9-f3d1119d7308-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.826155 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/913aa7e7-bbd3-4ba3-8e13-fb2fd74261da-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.826167 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913aa7e7-bbd3-4ba3-8e13-fb2fd74261da-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.826179 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw5ss\" (UniqueName: \"kubernetes.io/projected/913aa7e7-bbd3-4ba3-8e13-fb2fd74261da-kube-api-access-hw5ss\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.845269 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" event={"ID":"913aa7e7-bbd3-4ba3-8e13-fb2fd74261da","Type":"ContainerDied","Data":"70cefeaf768f38e007ce1ceb37be9c5039b37d24cfca8f087c9e18ef8a9a4272"} Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.845329 4981 scope.go:117] "RemoveContainer" containerID="31485e88a2b89447d3d7905d650e5dc469a6fdf6858635508fd616ec58e24d9a" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.845454 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.850494 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" event={"ID":"71f355f3-bcb9-4dec-9ab9-f3d1119d7308","Type":"ContainerDied","Data":"23f765834db78ee3be33d4f6c419d69f413c8ee0342858605875bf5c547e0160"} Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.850578 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54f7668998-kqf4l" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.891870 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54f7668998-kqf4l"] Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.902929 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-54f7668998-kqf4l"] Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.908210 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98"] Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.913524 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79c944dc4f-86j98"] Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.927341 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a66a26e-c29b-446c-b15a-6e5f94c63635-client-ca\") pod \"controller-manager-68b98f98c4-v4rq4\" (UID: \"4a66a26e-c29b-446c-b15a-6e5f94c63635\") " pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.927394 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a66a26e-c29b-446c-b15a-6e5f94c63635-serving-cert\") pod \"controller-manager-68b98f98c4-v4rq4\" (UID: \"4a66a26e-c29b-446c-b15a-6e5f94c63635\") " pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.927478 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a66a26e-c29b-446c-b15a-6e5f94c63635-proxy-ca-bundles\") pod 
\"controller-manager-68b98f98c4-v4rq4\" (UID: \"4a66a26e-c29b-446c-b15a-6e5f94c63635\") " pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.927511 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a66a26e-c29b-446c-b15a-6e5f94c63635-config\") pod \"controller-manager-68b98f98c4-v4rq4\" (UID: \"4a66a26e-c29b-446c-b15a-6e5f94c63635\") " pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.927537 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdmld\" (UniqueName: \"kubernetes.io/projected/4a66a26e-c29b-446c-b15a-6e5f94c63635-kube-api-access-qdmld\") pod \"controller-manager-68b98f98c4-v4rq4\" (UID: \"4a66a26e-c29b-446c-b15a-6e5f94c63635\") " pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.929955 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a66a26e-c29b-446c-b15a-6e5f94c63635-proxy-ca-bundles\") pod \"controller-manager-68b98f98c4-v4rq4\" (UID: \"4a66a26e-c29b-446c-b15a-6e5f94c63635\") " pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.930160 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a66a26e-c29b-446c-b15a-6e5f94c63635-client-ca\") pod \"controller-manager-68b98f98c4-v4rq4\" (UID: \"4a66a26e-c29b-446c-b15a-6e5f94c63635\") " pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.931792 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4a66a26e-c29b-446c-b15a-6e5f94c63635-config\") pod \"controller-manager-68b98f98c4-v4rq4\" (UID: \"4a66a26e-c29b-446c-b15a-6e5f94c63635\") " pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.933185 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a66a26e-c29b-446c-b15a-6e5f94c63635-serving-cert\") pod \"controller-manager-68b98f98c4-v4rq4\" (UID: \"4a66a26e-c29b-446c-b15a-6e5f94c63635\") " pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.948549 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdmld\" (UniqueName: \"kubernetes.io/projected/4a66a26e-c29b-446c-b15a-6e5f94c63635-kube-api-access-qdmld\") pod \"controller-manager-68b98f98c4-v4rq4\" (UID: \"4a66a26e-c29b-446c-b15a-6e5f94c63635\") " pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" Feb 27 18:48:38 crc kubenswrapper[4981]: I0227 18:48:38.979526 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" Feb 27 18:48:39 crc kubenswrapper[4981]: I0227 18:48:39.640903 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71f355f3-bcb9-4dec-9ab9-f3d1119d7308" path="/var/lib/kubelet/pods/71f355f3-bcb9-4dec-9ab9-f3d1119d7308/volumes" Feb 27 18:48:39 crc kubenswrapper[4981]: I0227 18:48:39.642592 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="913aa7e7-bbd3-4ba3-8e13-fb2fd74261da" path="/var/lib/kubelet/pods/913aa7e7-bbd3-4ba3-8e13-fb2fd74261da/volumes" Feb 27 18:48:40 crc kubenswrapper[4981]: I0227 18:48:40.731498 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-68b98f98c4-v4rq4"] Feb 27 18:48:40 crc kubenswrapper[4981]: I0227 18:48:40.881349 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc"] Feb 27 18:48:40 crc kubenswrapper[4981]: I0227 18:48:40.882400 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc" Feb 27 18:48:40 crc kubenswrapper[4981]: I0227 18:48:40.886720 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 18:48:40 crc kubenswrapper[4981]: I0227 18:48:40.886760 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 18:48:40 crc kubenswrapper[4981]: I0227 18:48:40.886823 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 18:48:40 crc kubenswrapper[4981]: I0227 18:48:40.886871 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 18:48:40 crc kubenswrapper[4981]: I0227 18:48:40.886909 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 18:48:40 crc kubenswrapper[4981]: I0227 18:48:40.887519 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 18:48:40 crc kubenswrapper[4981]: I0227 18:48:40.948414 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc"] Feb 27 18:48:40 crc kubenswrapper[4981]: I0227 18:48:40.959728 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmbp6\" (UniqueName: \"kubernetes.io/projected/24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4-kube-api-access-fmbp6\") pod \"route-controller-manager-7967d5845c-h45xc\" (UID: \"24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4\") " pod="openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc" Feb 27 18:48:40 crc kubenswrapper[4981]: I0227 18:48:40.959808 4981 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4-config\") pod \"route-controller-manager-7967d5845c-h45xc\" (UID: \"24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4\") " pod="openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc" Feb 27 18:48:40 crc kubenswrapper[4981]: I0227 18:48:40.959930 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4-client-ca\") pod \"route-controller-manager-7967d5845c-h45xc\" (UID: \"24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4\") " pod="openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc" Feb 27 18:48:40 crc kubenswrapper[4981]: I0227 18:48:40.959988 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4-serving-cert\") pod \"route-controller-manager-7967d5845c-h45xc\" (UID: \"24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4\") " pod="openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc" Feb 27 18:48:41 crc kubenswrapper[4981]: I0227 18:48:41.061172 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmbp6\" (UniqueName: \"kubernetes.io/projected/24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4-kube-api-access-fmbp6\") pod \"route-controller-manager-7967d5845c-h45xc\" (UID: \"24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4\") " pod="openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc" Feb 27 18:48:41 crc kubenswrapper[4981]: I0227 18:48:41.061317 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4-config\") pod 
\"route-controller-manager-7967d5845c-h45xc\" (UID: \"24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4\") " pod="openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc" Feb 27 18:48:41 crc kubenswrapper[4981]: I0227 18:48:41.061411 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4-client-ca\") pod \"route-controller-manager-7967d5845c-h45xc\" (UID: \"24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4\") " pod="openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc" Feb 27 18:48:41 crc kubenswrapper[4981]: I0227 18:48:41.061465 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4-serving-cert\") pod \"route-controller-manager-7967d5845c-h45xc\" (UID: \"24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4\") " pod="openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc" Feb 27 18:48:41 crc kubenswrapper[4981]: I0227 18:48:41.064137 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4-client-ca\") pod \"route-controller-manager-7967d5845c-h45xc\" (UID: \"24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4\") " pod="openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc" Feb 27 18:48:41 crc kubenswrapper[4981]: I0227 18:48:41.064824 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4-config\") pod \"route-controller-manager-7967d5845c-h45xc\" (UID: \"24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4\") " pod="openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc" Feb 27 18:48:41 crc kubenswrapper[4981]: I0227 18:48:41.068811 4981 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4-serving-cert\") pod \"route-controller-manager-7967d5845c-h45xc\" (UID: \"24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4\") " pod="openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc" Feb 27 18:48:41 crc kubenswrapper[4981]: I0227 18:48:41.090425 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmbp6\" (UniqueName: \"kubernetes.io/projected/24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4-kube-api-access-fmbp6\") pod \"route-controller-manager-7967d5845c-h45xc\" (UID: \"24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4\") " pod="openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc" Feb 27 18:48:41 crc kubenswrapper[4981]: I0227 18:48:41.196522 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc" Feb 27 18:48:41 crc kubenswrapper[4981]: I0227 18:48:41.579372 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 27 18:48:41 crc kubenswrapper[4981]: I0227 18:48:41.580225 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 18:48:41 crc kubenswrapper[4981]: I0227 18:48:41.607998 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 27 18:48:41 crc kubenswrapper[4981]: I0227 18:48:41.671037 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 18:48:41 crc kubenswrapper[4981]: I0227 18:48:41.671293 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70-kube-api-access\") pod \"installer-9-crc\" (UID: \"3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 18:48:41 crc kubenswrapper[4981]: I0227 18:48:41.671369 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70-var-lock\") pod \"installer-9-crc\" (UID: \"3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 18:48:41 crc kubenswrapper[4981]: I0227 18:48:41.773079 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 18:48:41 crc kubenswrapper[4981]: I0227 18:48:41.773149 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 18:48:41 crc kubenswrapper[4981]: I0227 18:48:41.773189 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70-kube-api-access\") pod \"installer-9-crc\" (UID: \"3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 18:48:41 crc kubenswrapper[4981]: I0227 18:48:41.773243 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70-var-lock\") pod \"installer-9-crc\" (UID: \"3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 18:48:41 crc kubenswrapper[4981]: I0227 18:48:41.773426 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70-var-lock\") pod \"installer-9-crc\" (UID: \"3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 18:48:41 crc kubenswrapper[4981]: I0227 18:48:41.791763 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70-kube-api-access\") pod \"installer-9-crc\" (UID: \"3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 18:48:41 crc kubenswrapper[4981]: I0227 18:48:41.929746 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 18:48:44 crc kubenswrapper[4981]: E0227 18:48:44.233553 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 18:48:44 crc kubenswrapper[4981]: E0227 18:48:44.233983 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bb2rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-fzncx_openshift-marketplace(a8d010a2-1cec-4e71-ac60-29b2e20787f4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 18:48:44 crc kubenswrapper[4981]: E0227 18:48:44.235223 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-fzncx" podUID="a8d010a2-1cec-4e71-ac60-29b2e20787f4" Feb 27 18:48:47 crc kubenswrapper[4981]: E0227 18:48:47.528871 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-fzncx" podUID="a8d010a2-1cec-4e71-ac60-29b2e20787f4" Feb 27 18:48:47 crc kubenswrapper[4981]: E0227 18:48:47.853083 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 18:48:47 crc kubenswrapper[4981]: E0227 18:48:47.853230 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dvvdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zrwz4_openshift-marketplace(fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 18:48:47 crc kubenswrapper[4981]: E0227 18:48:47.854370 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zrwz4" podUID="fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c" Feb 27 18:48:49 crc 
kubenswrapper[4981]: E0227 18:48:49.172639 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zrwz4" podUID="fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c" Feb 27 18:48:49 crc kubenswrapper[4981]: E0227 18:48:49.511841 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 18:48:49 crc kubenswrapper[4981]: E0227 18:48:49.512105 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5lmqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-m9ppw_openshift-marketplace(e3bd579c-4d5b-496d-bade-9a78e439970d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 18:48:49 crc kubenswrapper[4981]: E0227 18:48:49.513721 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-m9ppw" podUID="e3bd579c-4d5b-496d-bade-9a78e439970d" Feb 27 18:48:49 crc 
kubenswrapper[4981]: E0227 18:48:49.758848 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 18:48:49 crc kubenswrapper[4981]: E0227 18:48:49.758992 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4cckb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-rhmlr_openshift-marketplace(31a25fb4-5131-45cb-a965-eebe7bcf6a5d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 18:48:49 crc kubenswrapper[4981]: E0227 18:48:49.760300 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-rhmlr" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" Feb 27 18:48:50 crc kubenswrapper[4981]: I0227 18:48:50.249312 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:48:50 crc kubenswrapper[4981]: I0227 18:48:50.249618 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:48:51 crc kubenswrapper[4981]: E0227 18:48:51.203586 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m9ppw" podUID="e3bd579c-4d5b-496d-bade-9a78e439970d" Feb 27 18:48:51 crc kubenswrapper[4981]: E0227 18:48:51.203711 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rhmlr" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" Feb 27 18:48:51 crc kubenswrapper[4981]: I0227 18:48:51.221751 4981 scope.go:117] "RemoveContainer" containerID="a0327fbb2b3c1f2c476a622ef3eb7816b602de90562c626cea606574e26f86c3" Feb 27 18:48:51 crc kubenswrapper[4981]: E0227 18:48:51.515984 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 18:48:51 crc kubenswrapper[4981]: E0227 18:48:51.516549 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nhr5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-6nhjk_openshift-marketplace(afecaba0-c366-4a2f-a944-1a282869a955): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 18:48:51 crc kubenswrapper[4981]: E0227 18:48:51.518203 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-6nhjk" podUID="afecaba0-c366-4a2f-a944-1a282869a955" Feb 27 18:48:51 crc 
kubenswrapper[4981]: I0227 18:48:51.771303 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 27 18:48:51 crc kubenswrapper[4981]: I0227 18:48:51.779665 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 27 18:48:51 crc kubenswrapper[4981]: I0227 18:48:51.789638 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-68b98f98c4-v4rq4"] Feb 27 18:48:51 crc kubenswrapper[4981]: I0227 18:48:51.797952 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc"] Feb 27 18:48:51 crc kubenswrapper[4981]: W0227 18:48:51.808190 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a66a26e_c29b_446c_b15a_6e5f94c63635.slice/crio-9492c0037adf46026027f8f90c3818340cdde172aeb04bcf0bfcd5dc4426a170 WatchSource:0}: Error finding container 9492c0037adf46026027f8f90c3818340cdde172aeb04bcf0bfcd5dc4426a170: Status 404 returned error can't find the container with id 9492c0037adf46026027f8f90c3818340cdde172aeb04bcf0bfcd5dc4426a170 Feb 27 18:48:51 crc kubenswrapper[4981]: W0227 18:48:51.814277 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b1e2c0_4ae8_42ca_a065_98c8ccdfa0a4.slice/crio-b8090e7657478ae9e0da350a8bf13289e4e48d7237e5decd0d2b025874e6ecea WatchSource:0}: Error finding container b8090e7657478ae9e0da350a8bf13289e4e48d7237e5decd0d2b025874e6ecea: Status 404 returned error can't find the container with id b8090e7657478ae9e0da350a8bf13289e4e48d7237e5decd0d2b025874e6ecea Feb 27 18:48:51 crc kubenswrapper[4981]: E0227 18:48:51.823329 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying 
config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 18:48:51 crc kubenswrapper[4981]: E0227 18:48:51.823435 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-drxn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-6w9qb_openshift-marketplace(b0d12f02-fe5f-4ca7-a190-852ad6284190): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
logger="UnhandledError" Feb 27 18:48:51 crc kubenswrapper[4981]: E0227 18:48:51.825201 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-6w9qb" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" Feb 27 18:48:51 crc kubenswrapper[4981]: I0227 18:48:51.926782 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70","Type":"ContainerStarted","Data":"7e24288e57581eefea091ed4b430f2442c6eb04c8270242bf79530748381d307"} Feb 27 18:48:51 crc kubenswrapper[4981]: I0227 18:48:51.927713 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c4319326-d440-4e65-bb82-6da5c3031f30","Type":"ContainerStarted","Data":"588b9523c84c595b67ce4e9d3002213164870acf9482f6dfda12bc8788ddca1a"} Feb 27 18:48:51 crc kubenswrapper[4981]: I0227 18:48:51.929368 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" event={"ID":"4a66a26e-c29b-446c-b15a-6e5f94c63635","Type":"ContainerStarted","Data":"9492c0037adf46026027f8f90c3818340cdde172aeb04bcf0bfcd5dc4426a170"} Feb 27 18:48:51 crc kubenswrapper[4981]: I0227 18:48:51.930476 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"dcab3c53495d90f7e7016719dcf0204c20432a76bc8779ae8de1ed8fbe0f3906"} Feb 27 18:48:51 crc kubenswrapper[4981]: I0227 18:48:51.930497 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5b7062e088d5ce42e8686f1d1dbbabb38fb364ac6490ed1804bca19ab3ab56ff"} Feb 27 18:48:51 crc kubenswrapper[4981]: I0227 18:48:51.933037 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-j7hs2_450816f5-eb2f-44e6-9b62-fd3f3b2fbf48/cluster-samples-operator/0.log" Feb 27 18:48:51 crc kubenswrapper[4981]: I0227 18:48:51.933109 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j7hs2" event={"ID":"450816f5-eb2f-44e6-9b62-fd3f3b2fbf48","Type":"ContainerStarted","Data":"d150036698c5a43e4684915b64107556590728abc45ad0997def69bfaee1408e"} Feb 27 18:48:51 crc kubenswrapper[4981]: I0227 18:48:51.938476 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc" event={"ID":"24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4","Type":"ContainerStarted","Data":"b8090e7657478ae9e0da350a8bf13289e4e48d7237e5decd0d2b025874e6ecea"} Feb 27 18:48:51 crc kubenswrapper[4981]: I0227 18:48:51.939565 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8b0c615804a833eb69616478f01eb76df8c45373c7235a0f7eb5a318d82fda12"} Feb 27 18:48:51 crc kubenswrapper[4981]: I0227 18:48:51.940940 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"962a7c7688e5a902e784359368bd4a32f62377425fe69d33cf87687fe2b8e40d"} Feb 27 18:48:51 crc kubenswrapper[4981]: I0227 18:48:51.943216 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n2dzw" 
event={"ID":"f11688f5-7d6e-4931-88e5-31a5183eb6f3","Type":"ContainerStarted","Data":"6c3fb397db2b7c1b4984cb83bebcae5d4637dd383664b3d2947c5b020459983d"} Feb 27 18:48:51 crc kubenswrapper[4981]: E0227 18:48:51.944500 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-6w9qb" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" Feb 27 18:48:51 crc kubenswrapper[4981]: I0227 18:48:51.987620 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-n2dzw" podStartSLOduration=167.987599619 podStartE2EDuration="2m47.987599619s" podCreationTimestamp="2026-02-27 18:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:48:51.986858407 +0000 UTC m=+231.465639587" watchObservedRunningTime="2026-02-27 18:48:51.987599619 +0000 UTC m=+231.466380779" Feb 27 18:48:52 crc kubenswrapper[4981]: E0227 18:48:52.052294 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 18:48:52 crc kubenswrapper[4981]: E0227 18:48:52.052478 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vg7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rvh5h_openshift-marketplace(5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 18:48:52 crc kubenswrapper[4981]: E0227 18:48:52.053682 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-rvh5h" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" Feb 27 18:48:52 crc 
kubenswrapper[4981]: E0227 18:48:52.163485 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 18:48:52 crc kubenswrapper[4981]: E0227 18:48:52.163639 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rkndf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-rmtpf_openshift-marketplace(fbc8a428-3dab-402e-a105-0576aa196dcc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 18:48:52 crc kubenswrapper[4981]: E0227 18:48:52.164782 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-rmtpf" podUID="fbc8a428-3dab-402e-a105-0576aa196dcc" Feb 27 18:48:52 crc kubenswrapper[4981]: I0227 18:48:52.954708 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70","Type":"ContainerStarted","Data":"69939e1a4ee65d89b7c09ee744f1eeb811c41facc80450d32e38443af4845cfb"} Feb 27 18:48:52 crc kubenswrapper[4981]: I0227 18:48:52.958092 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc" event={"ID":"24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4","Type":"ContainerStarted","Data":"d23ad5845416195b877daf6663e39cdd880e7818c5bde1b9c6b221e7cc43728f"} Feb 27 18:48:52 crc kubenswrapper[4981]: I0227 18:48:52.958399 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc" Feb 27 18:48:52 crc kubenswrapper[4981]: I0227 18:48:52.960977 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f3762acc508ccd68d77f736b4cb3881c23ee421cdc289c8367b62f44f064c265"} Feb 27 18:48:52 crc kubenswrapper[4981]: I0227 18:48:52.965555 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"76c75c74334e50fac0666e5bf1f2835a205d578e245057640cee40762a22779a"} Feb 27 18:48:52 crc kubenswrapper[4981]: I0227 18:48:52.966731 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc" Feb 27 18:48:52 crc kubenswrapper[4981]: I0227 18:48:52.970403 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" event={"ID":"4a66a26e-c29b-446c-b15a-6e5f94c63635","Type":"ContainerStarted","Data":"3b0e0952a55762e35410396da399da6ab890ebfa2cef9ae6e7c89c7ef47ae34e"} Feb 27 18:48:52 crc kubenswrapper[4981]: I0227 18:48:52.970645 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" podUID="4a66a26e-c29b-446c-b15a-6e5f94c63635" containerName="controller-manager" containerID="cri-o://3b0e0952a55762e35410396da399da6ab890ebfa2cef9ae6e7c89c7ef47ae34e" gracePeriod=30 Feb 27 18:48:52 crc kubenswrapper[4981]: I0227 18:48:52.971543 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" Feb 27 18:48:52 crc kubenswrapper[4981]: I0227 18:48:52.977463 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=11.977445055 podStartE2EDuration="11.977445055s" podCreationTimestamp="2026-02-27 18:48:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:48:52.969798973 +0000 UTC m=+232.448580183" watchObservedRunningTime="2026-02-27 18:48:52.977445055 +0000 UTC m=+232.456226255" Feb 27 18:48:52 crc kubenswrapper[4981]: 
I0227 18:48:52.979946 4981 generic.go:334] "Generic (PLEG): container finished" podID="c4319326-d440-4e65-bb82-6da5c3031f30" containerID="5e4b4043d592144d0cab189cdb08c70c6eb11b6003463cdb1eab0348212274f0" exitCode=0 Feb 27 18:48:52 crc kubenswrapper[4981]: I0227 18:48:52.981010 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c4319326-d440-4e65-bb82-6da5c3031f30","Type":"ContainerDied","Data":"5e4b4043d592144d0cab189cdb08c70c6eb11b6003463cdb1eab0348212274f0"} Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.014693 4981 patch_prober.go:28] interesting pod/controller-manager-68b98f98c4-v4rq4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:56548->10.217.0.58:8443: read: connection reset by peer" start-of-body= Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.014758 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" podUID="4a66a26e-c29b-446c-b15a-6e5f94c63635" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:56548->10.217.0.58:8443: read: connection reset by peer" Feb 27 18:48:53 crc kubenswrapper[4981]: E0227 18:48:53.025499 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rvh5h" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.057629 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc" podStartSLOduration=13.057602695 
podStartE2EDuration="13.057602695s" podCreationTimestamp="2026-02-27 18:48:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:48:53.049979733 +0000 UTC m=+232.528760913" watchObservedRunningTime="2026-02-27 18:48:53.057602695 +0000 UTC m=+232.536383875" Feb 27 18:48:53 crc kubenswrapper[4981]: E0227 18:48:53.063095 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-rmtpf" podUID="fbc8a428-3dab-402e-a105-0576aa196dcc" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.076652 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" podStartSLOduration=33.076628331 podStartE2EDuration="33.076628331s" podCreationTimestamp="2026-02-27 18:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:48:53.075686963 +0000 UTC m=+232.554468123" watchObservedRunningTime="2026-02-27 18:48:53.076628331 +0000 UTC m=+232.555409501" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.365552 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.396433 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8"] Feb 27 18:48:53 crc kubenswrapper[4981]: E0227 18:48:53.396808 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a66a26e-c29b-446c-b15a-6e5f94c63635" containerName="controller-manager" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.396829 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a66a26e-c29b-446c-b15a-6e5f94c63635" containerName="controller-manager" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.403007 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a66a26e-c29b-446c-b15a-6e5f94c63635" containerName="controller-manager" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.407141 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.413659 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8"] Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.439144 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a66a26e-c29b-446c-b15a-6e5f94c63635-config\") pod \"4a66a26e-c29b-446c-b15a-6e5f94c63635\" (UID: \"4a66a26e-c29b-446c-b15a-6e5f94c63635\") " Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.439274 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a66a26e-c29b-446c-b15a-6e5f94c63635-client-ca\") pod \"4a66a26e-c29b-446c-b15a-6e5f94c63635\" (UID: \"4a66a26e-c29b-446c-b15a-6e5f94c63635\") " Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.439323 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdmld\" (UniqueName: \"kubernetes.io/projected/4a66a26e-c29b-446c-b15a-6e5f94c63635-kube-api-access-qdmld\") pod \"4a66a26e-c29b-446c-b15a-6e5f94c63635\" (UID: \"4a66a26e-c29b-446c-b15a-6e5f94c63635\") " Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.439347 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a66a26e-c29b-446c-b15a-6e5f94c63635-serving-cert\") pod \"4a66a26e-c29b-446c-b15a-6e5f94c63635\" (UID: \"4a66a26e-c29b-446c-b15a-6e5f94c63635\") " Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.439380 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a66a26e-c29b-446c-b15a-6e5f94c63635-proxy-ca-bundles\") pod \"4a66a26e-c29b-446c-b15a-6e5f94c63635\" (UID: 
\"4a66a26e-c29b-446c-b15a-6e5f94c63635\") " Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.440312 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a66a26e-c29b-446c-b15a-6e5f94c63635-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4a66a26e-c29b-446c-b15a-6e5f94c63635" (UID: "4a66a26e-c29b-446c-b15a-6e5f94c63635"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.440384 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a66a26e-c29b-446c-b15a-6e5f94c63635-config" (OuterVolumeSpecName: "config") pod "4a66a26e-c29b-446c-b15a-6e5f94c63635" (UID: "4a66a26e-c29b-446c-b15a-6e5f94c63635"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.440804 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a66a26e-c29b-446c-b15a-6e5f94c63635-client-ca" (OuterVolumeSpecName: "client-ca") pod "4a66a26e-c29b-446c-b15a-6e5f94c63635" (UID: "4a66a26e-c29b-446c-b15a-6e5f94c63635"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.445837 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a66a26e-c29b-446c-b15a-6e5f94c63635-kube-api-access-qdmld" (OuterVolumeSpecName: "kube-api-access-qdmld") pod "4a66a26e-c29b-446c-b15a-6e5f94c63635" (UID: "4a66a26e-c29b-446c-b15a-6e5f94c63635"). InnerVolumeSpecName "kube-api-access-qdmld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.455212 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a66a26e-c29b-446c-b15a-6e5f94c63635-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4a66a26e-c29b-446c-b15a-6e5f94c63635" (UID: "4a66a26e-c29b-446c-b15a-6e5f94c63635"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.540175 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrqxh\" (UniqueName: \"kubernetes.io/projected/06a91585-08ff-4e90-929a-691f252bc40c-kube-api-access-jrqxh\") pod \"controller-manager-55fdcddcdb-4vvj8\" (UID: \"06a91585-08ff-4e90-929a-691f252bc40c\") " pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.540220 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06a91585-08ff-4e90-929a-691f252bc40c-proxy-ca-bundles\") pod \"controller-manager-55fdcddcdb-4vvj8\" (UID: \"06a91585-08ff-4e90-929a-691f252bc40c\") " pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.540264 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06a91585-08ff-4e90-929a-691f252bc40c-client-ca\") pod \"controller-manager-55fdcddcdb-4vvj8\" (UID: \"06a91585-08ff-4e90-929a-691f252bc40c\") " pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.540290 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/06a91585-08ff-4e90-929a-691f252bc40c-serving-cert\") pod \"controller-manager-55fdcddcdb-4vvj8\" (UID: \"06a91585-08ff-4e90-929a-691f252bc40c\") " pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.540313 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06a91585-08ff-4e90-929a-691f252bc40c-config\") pod \"controller-manager-55fdcddcdb-4vvj8\" (UID: \"06a91585-08ff-4e90-929a-691f252bc40c\") " pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.540359 4981 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a66a26e-c29b-446c-b15a-6e5f94c63635-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.540371 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdmld\" (UniqueName: \"kubernetes.io/projected/4a66a26e-c29b-446c-b15a-6e5f94c63635-kube-api-access-qdmld\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.540380 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a66a26e-c29b-446c-b15a-6e5f94c63635-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.540389 4981 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a66a26e-c29b-446c-b15a-6e5f94c63635-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.540399 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a66a26e-c29b-446c-b15a-6e5f94c63635-config\") on node \"crc\" DevicePath 
\"\"" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.641764 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06a91585-08ff-4e90-929a-691f252bc40c-config\") pod \"controller-manager-55fdcddcdb-4vvj8\" (UID: \"06a91585-08ff-4e90-929a-691f252bc40c\") " pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.642178 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrqxh\" (UniqueName: \"kubernetes.io/projected/06a91585-08ff-4e90-929a-691f252bc40c-kube-api-access-jrqxh\") pod \"controller-manager-55fdcddcdb-4vvj8\" (UID: \"06a91585-08ff-4e90-929a-691f252bc40c\") " pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.642296 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06a91585-08ff-4e90-929a-691f252bc40c-proxy-ca-bundles\") pod \"controller-manager-55fdcddcdb-4vvj8\" (UID: \"06a91585-08ff-4e90-929a-691f252bc40c\") " pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.642419 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06a91585-08ff-4e90-929a-691f252bc40c-client-ca\") pod \"controller-manager-55fdcddcdb-4vvj8\" (UID: \"06a91585-08ff-4e90-929a-691f252bc40c\") " pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.642531 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06a91585-08ff-4e90-929a-691f252bc40c-serving-cert\") pod \"controller-manager-55fdcddcdb-4vvj8\" (UID: 
\"06a91585-08ff-4e90-929a-691f252bc40c\") " pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.644036 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06a91585-08ff-4e90-929a-691f252bc40c-config\") pod \"controller-manager-55fdcddcdb-4vvj8\" (UID: \"06a91585-08ff-4e90-929a-691f252bc40c\") " pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.644164 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06a91585-08ff-4e90-929a-691f252bc40c-proxy-ca-bundles\") pod \"controller-manager-55fdcddcdb-4vvj8\" (UID: \"06a91585-08ff-4e90-929a-691f252bc40c\") " pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.644663 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06a91585-08ff-4e90-929a-691f252bc40c-client-ca\") pod \"controller-manager-55fdcddcdb-4vvj8\" (UID: \"06a91585-08ff-4e90-929a-691f252bc40c\") " pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.647994 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06a91585-08ff-4e90-929a-691f252bc40c-serving-cert\") pod \"controller-manager-55fdcddcdb-4vvj8\" (UID: \"06a91585-08ff-4e90-929a-691f252bc40c\") " pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.667770 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrqxh\" (UniqueName: 
\"kubernetes.io/projected/06a91585-08ff-4e90-929a-691f252bc40c-kube-api-access-jrqxh\") pod \"controller-manager-55fdcddcdb-4vvj8\" (UID: \"06a91585-08ff-4e90-929a-691f252bc40c\") " pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.726574 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.797539 4981 csr.go:261] certificate signing request csr-jrqzj is approved, waiting to be issued Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.805832 4981 csr.go:257] certificate signing request csr-jrqzj is issued Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.990041 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8"] Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.994337 4981 generic.go:334] "Generic (PLEG): container finished" podID="8bff5a34-e6d7-482d-bed3-dfe5269b225a" containerID="faa5024f7d10e3c37f7183d7a6b6c9555a92f0f23d606e016f8b33db40afbe15" exitCode=0 Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.994445 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536968-jn8tc" event={"ID":"8bff5a34-e6d7-482d-bed3-dfe5269b225a","Type":"ContainerDied","Data":"faa5024f7d10e3c37f7183d7a6b6c9555a92f0f23d606e016f8b33db40afbe15"} Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.995733 4981 generic.go:334] "Generic (PLEG): container finished" podID="4a66a26e-c29b-446c-b15a-6e5f94c63635" containerID="3b0e0952a55762e35410396da399da6ab890ebfa2cef9ae6e7c89c7ef47ae34e" exitCode=0 Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.995787 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.995837 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" event={"ID":"4a66a26e-c29b-446c-b15a-6e5f94c63635","Type":"ContainerDied","Data":"3b0e0952a55762e35410396da399da6ab890ebfa2cef9ae6e7c89c7ef47ae34e"} Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.995872 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68b98f98c4-v4rq4" event={"ID":"4a66a26e-c29b-446c-b15a-6e5f94c63635","Type":"ContainerDied","Data":"9492c0037adf46026027f8f90c3818340cdde172aeb04bcf0bfcd5dc4426a170"} Feb 27 18:48:53 crc kubenswrapper[4981]: I0227 18:48:53.995892 4981 scope.go:117] "RemoveContainer" containerID="3b0e0952a55762e35410396da399da6ab890ebfa2cef9ae6e7c89c7ef47ae34e" Feb 27 18:48:54 crc kubenswrapper[4981]: I0227 18:48:54.033964 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-68b98f98c4-v4rq4"] Feb 27 18:48:54 crc kubenswrapper[4981]: I0227 18:48:54.037533 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-68b98f98c4-v4rq4"] Feb 27 18:48:54 crc kubenswrapper[4981]: I0227 18:48:54.048091 4981 scope.go:117] "RemoveContainer" containerID="3b0e0952a55762e35410396da399da6ab890ebfa2cef9ae6e7c89c7ef47ae34e" Feb 27 18:48:54 crc kubenswrapper[4981]: E0227 18:48:54.048527 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b0e0952a55762e35410396da399da6ab890ebfa2cef9ae6e7c89c7ef47ae34e\": container with ID starting with 3b0e0952a55762e35410396da399da6ab890ebfa2cef9ae6e7c89c7ef47ae34e not found: ID does not exist" containerID="3b0e0952a55762e35410396da399da6ab890ebfa2cef9ae6e7c89c7ef47ae34e" Feb 27 18:48:54 crc 
kubenswrapper[4981]: I0227 18:48:54.048563 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b0e0952a55762e35410396da399da6ab890ebfa2cef9ae6e7c89c7ef47ae34e"} err="failed to get container status \"3b0e0952a55762e35410396da399da6ab890ebfa2cef9ae6e7c89c7ef47ae34e\": rpc error: code = NotFound desc = could not find container \"3b0e0952a55762e35410396da399da6ab890ebfa2cef9ae6e7c89c7ef47ae34e\": container with ID starting with 3b0e0952a55762e35410396da399da6ab890ebfa2cef9ae6e7c89c7ef47ae34e not found: ID does not exist" Feb 27 18:48:54 crc kubenswrapper[4981]: I0227 18:48:54.174167 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 18:48:54 crc kubenswrapper[4981]: I0227 18:48:54.249504 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4319326-d440-4e65-bb82-6da5c3031f30-kube-api-access\") pod \"c4319326-d440-4e65-bb82-6da5c3031f30\" (UID: \"c4319326-d440-4e65-bb82-6da5c3031f30\") " Feb 27 18:48:54 crc kubenswrapper[4981]: I0227 18:48:54.249807 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4319326-d440-4e65-bb82-6da5c3031f30-kubelet-dir\") pod \"c4319326-d440-4e65-bb82-6da5c3031f30\" (UID: \"c4319326-d440-4e65-bb82-6da5c3031f30\") " Feb 27 18:48:54 crc kubenswrapper[4981]: I0227 18:48:54.249955 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4319326-d440-4e65-bb82-6da5c3031f30-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c4319326-d440-4e65-bb82-6da5c3031f30" (UID: "c4319326-d440-4e65-bb82-6da5c3031f30"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:48:54 crc kubenswrapper[4981]: I0227 18:48:54.250364 4981 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4319326-d440-4e65-bb82-6da5c3031f30-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:54 crc kubenswrapper[4981]: I0227 18:48:54.255434 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4319326-d440-4e65-bb82-6da5c3031f30-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c4319326-d440-4e65-bb82-6da5c3031f30" (UID: "c4319326-d440-4e65-bb82-6da5c3031f30"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:48:54 crc kubenswrapper[4981]: I0227 18:48:54.351689 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4319326-d440-4e65-bb82-6da5c3031f30-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:54 crc kubenswrapper[4981]: I0227 18:48:54.808096 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-12 00:46:19.360903595 +0000 UTC Feb 27 18:48:54 crc kubenswrapper[4981]: I0227 18:48:54.808521 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6893h57m24.552386014s for next certificate rotation Feb 27 18:48:55 crc kubenswrapper[4981]: I0227 18:48:55.005018 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" event={"ID":"06a91585-08ff-4e90-929a-691f252bc40c","Type":"ContainerStarted","Data":"95d24b3a9647484f6cb3343829ceb12d5127042eb64cd193cc73b74757e2b973"} Feb 27 18:48:55 crc kubenswrapper[4981]: I0227 18:48:55.007776 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" 
event={"ID":"06a91585-08ff-4e90-929a-691f252bc40c","Type":"ContainerStarted","Data":"151595ade6ae026f1d489b71c3176261490c127050989ddba0077b699a83f986"} Feb 27 18:48:55 crc kubenswrapper[4981]: I0227 18:48:55.008137 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" Feb 27 18:48:55 crc kubenswrapper[4981]: I0227 18:48:55.010145 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 18:48:55 crc kubenswrapper[4981]: I0227 18:48:55.012690 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c4319326-d440-4e65-bb82-6da5c3031f30","Type":"ContainerDied","Data":"588b9523c84c595b67ce4e9d3002213164870acf9482f6dfda12bc8788ddca1a"} Feb 27 18:48:55 crc kubenswrapper[4981]: I0227 18:48:55.012739 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="588b9523c84c595b67ce4e9d3002213164870acf9482f6dfda12bc8788ddca1a" Feb 27 18:48:55 crc kubenswrapper[4981]: I0227 18:48:55.013999 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" Feb 27 18:48:55 crc kubenswrapper[4981]: I0227 18:48:55.034981 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" podStartSLOduration=15.034943594 podStartE2EDuration="15.034943594s" podCreationTimestamp="2026-02-27 18:48:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:48:55.021795196 +0000 UTC m=+234.500576376" watchObservedRunningTime="2026-02-27 18:48:55.034943594 +0000 UTC m=+234.513724794" Feb 27 18:48:55 crc kubenswrapper[4981]: I0227 18:48:55.280232 4981 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536968-jn8tc" Feb 27 18:48:55 crc kubenswrapper[4981]: I0227 18:48:55.366457 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw79t\" (UniqueName: \"kubernetes.io/projected/8bff5a34-e6d7-482d-bed3-dfe5269b225a-kube-api-access-nw79t\") pod \"8bff5a34-e6d7-482d-bed3-dfe5269b225a\" (UID: \"8bff5a34-e6d7-482d-bed3-dfe5269b225a\") " Feb 27 18:48:55 crc kubenswrapper[4981]: I0227 18:48:55.372383 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bff5a34-e6d7-482d-bed3-dfe5269b225a-kube-api-access-nw79t" (OuterVolumeSpecName: "kube-api-access-nw79t") pod "8bff5a34-e6d7-482d-bed3-dfe5269b225a" (UID: "8bff5a34-e6d7-482d-bed3-dfe5269b225a"). InnerVolumeSpecName "kube-api-access-nw79t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:48:55 crc kubenswrapper[4981]: I0227 18:48:55.469362 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw79t\" (UniqueName: \"kubernetes.io/projected/8bff5a34-e6d7-482d-bed3-dfe5269b225a-kube-api-access-nw79t\") on node \"crc\" DevicePath \"\"" Feb 27 18:48:55 crc kubenswrapper[4981]: I0227 18:48:55.640417 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a66a26e-c29b-446c-b15a-6e5f94c63635" path="/var/lib/kubelet/pods/4a66a26e-c29b-446c-b15a-6e5f94c63635/volumes" Feb 27 18:48:55 crc kubenswrapper[4981]: I0227 18:48:55.809656 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-09 20:17:06.651100924 +0000 UTC Feb 27 18:48:55 crc kubenswrapper[4981]: I0227 18:48:55.809711 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7585h28m10.841395253s for next certificate rotation Feb 27 18:48:56 crc kubenswrapper[4981]: I0227 18:48:56.022400 4981 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536968-jn8tc" Feb 27 18:48:56 crc kubenswrapper[4981]: I0227 18:48:56.023100 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536968-jn8tc" event={"ID":"8bff5a34-e6d7-482d-bed3-dfe5269b225a","Type":"ContainerDied","Data":"b0807f845fcbdd39b44ca659de0f6e0ae98829adde27abbc803ff9140fa7747a"} Feb 27 18:48:56 crc kubenswrapper[4981]: I0227 18:48:56.023137 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0807f845fcbdd39b44ca659de0f6e0ae98829adde27abbc803ff9140fa7747a" Feb 27 18:48:56 crc kubenswrapper[4981]: I0227 18:48:56.816835 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:49:03 crc kubenswrapper[4981]: I0227 18:49:03.090441 4981 generic.go:334] "Generic (PLEG): container finished" podID="e3bd579c-4d5b-496d-bade-9a78e439970d" containerID="2b92f49cb085d48c1bffbfb2a0cdf19d6e45fa5ded19341bfc4979e34d19f42b" exitCode=0 Feb 27 18:49:03 crc kubenswrapper[4981]: I0227 18:49:03.090563 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m9ppw" event={"ID":"e3bd579c-4d5b-496d-bade-9a78e439970d","Type":"ContainerDied","Data":"2b92f49cb085d48c1bffbfb2a0cdf19d6e45fa5ded19341bfc4979e34d19f42b"} Feb 27 18:49:04 crc kubenswrapper[4981]: I0227 18:49:04.098791 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fzncx" event={"ID":"a8d010a2-1cec-4e71-ac60-29b2e20787f4","Type":"ContainerStarted","Data":"a4158cd2b02411cf681834c9aca49bf3abc94922ecbe24d05aae89ffce69799b"} Feb 27 18:49:05 crc kubenswrapper[4981]: I0227 18:49:05.105921 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m9ppw" 
event={"ID":"e3bd579c-4d5b-496d-bade-9a78e439970d","Type":"ContainerStarted","Data":"f11af0fe2f067bdd8546479b54f3936d45147bcd6b90d3946ed7619fbc83c4a2"} Feb 27 18:49:05 crc kubenswrapper[4981]: I0227 18:49:05.108115 4981 generic.go:334] "Generic (PLEG): container finished" podID="a8d010a2-1cec-4e71-ac60-29b2e20787f4" containerID="a4158cd2b02411cf681834c9aca49bf3abc94922ecbe24d05aae89ffce69799b" exitCode=0 Feb 27 18:49:05 crc kubenswrapper[4981]: I0227 18:49:05.108201 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fzncx" event={"ID":"a8d010a2-1cec-4e71-ac60-29b2e20787f4","Type":"ContainerDied","Data":"a4158cd2b02411cf681834c9aca49bf3abc94922ecbe24d05aae89ffce69799b"} Feb 27 18:49:05 crc kubenswrapper[4981]: I0227 18:49:05.111006 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6w9qb" event={"ID":"b0d12f02-fe5f-4ca7-a190-852ad6284190","Type":"ContainerStarted","Data":"0be60c7a1334a61631037ba538dabf35bce47213866ae2630c470804a5160668"} Feb 27 18:49:05 crc kubenswrapper[4981]: I0227 18:49:05.120417 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m9ppw" podStartSLOduration=2.961451141 podStartE2EDuration="1m6.120401309s" podCreationTimestamp="2026-02-27 18:47:59 +0000 UTC" firstStartedPulling="2026-02-27 18:48:01.415420889 +0000 UTC m=+180.894202049" lastFinishedPulling="2026-02-27 18:49:04.574371027 +0000 UTC m=+244.053152217" observedRunningTime="2026-02-27 18:49:05.118858592 +0000 UTC m=+244.597639742" watchObservedRunningTime="2026-02-27 18:49:05.120401309 +0000 UTC m=+244.599182469" Feb 27 18:49:06 crc kubenswrapper[4981]: I0227 18:49:06.117365 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fzncx" 
event={"ID":"a8d010a2-1cec-4e71-ac60-29b2e20787f4","Type":"ContainerStarted","Data":"5f610ebf3095dd86017eb6aba0ff01b369c00986eb1dc58f0b78e391f5b9edf8"} Feb 27 18:49:06 crc kubenswrapper[4981]: I0227 18:49:06.119138 4981 generic.go:334] "Generic (PLEG): container finished" podID="b0d12f02-fe5f-4ca7-a190-852ad6284190" containerID="0be60c7a1334a61631037ba538dabf35bce47213866ae2630c470804a5160668" exitCode=0 Feb 27 18:49:06 crc kubenswrapper[4981]: I0227 18:49:06.119195 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6w9qb" event={"ID":"b0d12f02-fe5f-4ca7-a190-852ad6284190","Type":"ContainerDied","Data":"0be60c7a1334a61631037ba538dabf35bce47213866ae2630c470804a5160668"} Feb 27 18:49:06 crc kubenswrapper[4981]: I0227 18:49:06.120672 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrwz4" event={"ID":"fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c","Type":"ContainerStarted","Data":"62d689abca5da9d5e754132cc0a3392e4a6ee7a2ef87ac714a6c35639a23e231"} Feb 27 18:49:06 crc kubenswrapper[4981]: I0227 18:49:06.142141 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fzncx" podStartSLOduration=2.9882357539999997 podStartE2EDuration="1m7.142120551s" podCreationTimestamp="2026-02-27 18:47:59 +0000 UTC" firstStartedPulling="2026-02-27 18:48:01.41543675 +0000 UTC m=+180.894217910" lastFinishedPulling="2026-02-27 18:49:05.569321547 +0000 UTC m=+245.048102707" observedRunningTime="2026-02-27 18:49:06.139978936 +0000 UTC m=+245.618760086" watchObservedRunningTime="2026-02-27 18:49:06.142120551 +0000 UTC m=+245.620901711" Feb 27 18:49:07 crc kubenswrapper[4981]: I0227 18:49:07.137386 4981 generic.go:334] "Generic (PLEG): container finished" podID="fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c" containerID="62d689abca5da9d5e754132cc0a3392e4a6ee7a2ef87ac714a6c35639a23e231" exitCode=0 Feb 27 18:49:07 crc kubenswrapper[4981]: I0227 
18:49:07.137435 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrwz4" event={"ID":"fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c","Type":"ContainerDied","Data":"62d689abca5da9d5e754132cc0a3392e4a6ee7a2ef87ac714a6c35639a23e231"} Feb 27 18:49:09 crc kubenswrapper[4981]: I0227 18:49:09.879940 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m9ppw" Feb 27 18:49:09 crc kubenswrapper[4981]: I0227 18:49:09.880012 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m9ppw" Feb 27 18:49:10 crc kubenswrapper[4981]: I0227 18:49:10.054284 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fzncx" Feb 27 18:49:10 crc kubenswrapper[4981]: I0227 18:49:10.054520 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fzncx" Feb 27 18:49:11 crc kubenswrapper[4981]: I0227 18:49:11.284067 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m9ppw" Feb 27 18:49:11 crc kubenswrapper[4981]: I0227 18:49:11.288097 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fzncx" Feb 27 18:49:11 crc kubenswrapper[4981]: I0227 18:49:11.338482 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fzncx" Feb 27 18:49:11 crc kubenswrapper[4981]: I0227 18:49:11.346554 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m9ppw" Feb 27 18:49:20 crc kubenswrapper[4981]: I0227 18:49:20.249416 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:49:20 crc kubenswrapper[4981]: I0227 18:49:20.249959 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:49:20 crc kubenswrapper[4981]: I0227 18:49:20.760791 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8"] Feb 27 18:49:20 crc kubenswrapper[4981]: I0227 18:49:20.761381 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" podUID="06a91585-08ff-4e90-929a-691f252bc40c" containerName="controller-manager" containerID="cri-o://95d24b3a9647484f6cb3343829ceb12d5127042eb64cd193cc73b74757e2b973" gracePeriod=30 Feb 27 18:49:20 crc kubenswrapper[4981]: I0227 18:49:20.820115 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc"] Feb 27 18:49:20 crc kubenswrapper[4981]: I0227 18:49:20.820516 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc" podUID="24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4" containerName="route-controller-manager" containerID="cri-o://d23ad5845416195b877daf6663e39cdd880e7818c5bde1b9c6b221e7cc43728f" gracePeriod=30 Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.283544 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmtpf" 
event={"ID":"fbc8a428-3dab-402e-a105-0576aa196dcc","Type":"ContainerStarted","Data":"7f021a78e2685d6277851882e94f82c756da89b3f57b57016dcdbaca2ac2e671"} Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.287615 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6w9qb" event={"ID":"b0d12f02-fe5f-4ca7-a190-852ad6284190","Type":"ContainerStarted","Data":"e0c0aecf3978cf4d63033b6ed5f49c27e8b9e3d5ee2e8a145c81d8b06d336d73"} Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.292898 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrwz4" event={"ID":"fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c","Type":"ContainerStarted","Data":"db1ebaf4598a782c9c5b2168db854dbe4ea8aa0bb9e16d47cac86714bc064f6f"} Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.294588 4981 generic.go:334] "Generic (PLEG): container finished" podID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" containerID="ba967cc3360c157115cac8468bad3e1b6085e834ff71572f96c74e414f02fb32" exitCode=0 Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.294693 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rhmlr" event={"ID":"31a25fb4-5131-45cb-a965-eebe7bcf6a5d","Type":"ContainerDied","Data":"ba967cc3360c157115cac8468bad3e1b6085e834ff71572f96c74e414f02fb32"} Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.297363 4981 generic.go:334] "Generic (PLEG): container finished" podID="24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4" containerID="d23ad5845416195b877daf6663e39cdd880e7818c5bde1b9c6b221e7cc43728f" exitCode=0 Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.297439 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc" event={"ID":"24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4","Type":"ContainerDied","Data":"d23ad5845416195b877daf6663e39cdd880e7818c5bde1b9c6b221e7cc43728f"} Feb 27 18:49:21 crc 
kubenswrapper[4981]: I0227 18:49:21.297494 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc" event={"ID":"24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4","Type":"ContainerDied","Data":"b8090e7657478ae9e0da350a8bf13289e4e48d7237e5decd0d2b025874e6ecea"} Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.297508 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8090e7657478ae9e0da350a8bf13289e4e48d7237e5decd0d2b025874e6ecea" Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.298794 4981 generic.go:334] "Generic (PLEG): container finished" podID="06a91585-08ff-4e90-929a-691f252bc40c" containerID="95d24b3a9647484f6cb3343829ceb12d5127042eb64cd193cc73b74757e2b973" exitCode=0 Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.298848 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" event={"ID":"06a91585-08ff-4e90-929a-691f252bc40c","Type":"ContainerDied","Data":"95d24b3a9647484f6cb3343829ceb12d5127042eb64cd193cc73b74757e2b973"} Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.317280 4981 generic.go:334] "Generic (PLEG): container finished" podID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" containerID="3ee26685cf416571f2bd3a77b2535c8a8394c823cc487b19edec6d98f5947177" exitCode=0 Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.317434 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvh5h" event={"ID":"5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3","Type":"ContainerDied","Data":"3ee26685cf416571f2bd3a77b2535c8a8394c823cc487b19edec6d98f5947177"} Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.322346 4981 generic.go:334] "Generic (PLEG): container finished" podID="afecaba0-c366-4a2f-a944-1a282869a955" containerID="b50d9947f1dfaf25a613cad6d538fd399ea7de2d5ed075cb1a1750913030f342" exitCode=0 Feb 27 18:49:21 crc 
kubenswrapper[4981]: I0227 18:49:21.322387 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6nhjk" event={"ID":"afecaba0-c366-4a2f-a944-1a282869a955","Type":"ContainerDied","Data":"b50d9947f1dfaf25a613cad6d538fd399ea7de2d5ed075cb1a1750913030f342"} Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.339438 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc" Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.361839 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6w9qb" podStartSLOduration=3.26796393 podStartE2EDuration="1m20.36182342s" podCreationTimestamp="2026-02-27 18:48:01 +0000 UTC" firstStartedPulling="2026-02-27 18:48:03.465542036 +0000 UTC m=+182.944323196" lastFinishedPulling="2026-02-27 18:49:20.559401496 +0000 UTC m=+260.038182686" observedRunningTime="2026-02-27 18:49:21.359360655 +0000 UTC m=+260.838141805" watchObservedRunningTime="2026-02-27 18:49:21.36182342 +0000 UTC m=+260.840604580" Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.391872 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zrwz4" podStartSLOduration=2.299994834 podStartE2EDuration="1m18.391856361s" podCreationTimestamp="2026-02-27 18:48:03 +0000 UTC" firstStartedPulling="2026-02-27 18:48:04.489444293 +0000 UTC m=+183.968225453" lastFinishedPulling="2026-02-27 18:49:20.58130579 +0000 UTC m=+260.060086980" observedRunningTime="2026-02-27 18:49:21.389388796 +0000 UTC m=+260.868169966" watchObservedRunningTime="2026-02-27 18:49:21.391856361 +0000 UTC m=+260.870637521" Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.462068 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4-config\") pod \"24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4\" (UID: \"24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4\") " Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.462133 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4-serving-cert\") pod \"24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4\" (UID: \"24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4\") " Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.462257 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4-client-ca\") pod \"24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4\" (UID: \"24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4\") " Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.462282 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmbp6\" (UniqueName: \"kubernetes.io/projected/24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4-kube-api-access-fmbp6\") pod \"24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4\" (UID: \"24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4\") " Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.463859 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4-client-ca" (OuterVolumeSpecName: "client-ca") pod "24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4" (UID: "24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.464209 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4-config" (OuterVolumeSpecName: "config") pod "24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4" (UID: "24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.467717 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4-kube-api-access-fmbp6" (OuterVolumeSpecName: "kube-api-access-fmbp6") pod "24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4" (UID: "24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4"). InnerVolumeSpecName "kube-api-access-fmbp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.468230 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4" (UID: "24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.564743 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmbp6\" (UniqueName: \"kubernetes.io/projected/24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4-kube-api-access-fmbp6\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.564788 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.564805 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.564820 4981 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4-client-ca\") 
on node \"crc\" DevicePath \"\"" Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.845711 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.969736 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06a91585-08ff-4e90-929a-691f252bc40c-client-ca\") pod \"06a91585-08ff-4e90-929a-691f252bc40c\" (UID: \"06a91585-08ff-4e90-929a-691f252bc40c\") " Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.969833 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06a91585-08ff-4e90-929a-691f252bc40c-config\") pod \"06a91585-08ff-4e90-929a-691f252bc40c\" (UID: \"06a91585-08ff-4e90-929a-691f252bc40c\") " Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.969891 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06a91585-08ff-4e90-929a-691f252bc40c-proxy-ca-bundles\") pod \"06a91585-08ff-4e90-929a-691f252bc40c\" (UID: \"06a91585-08ff-4e90-929a-691f252bc40c\") " Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.969953 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrqxh\" (UniqueName: \"kubernetes.io/projected/06a91585-08ff-4e90-929a-691f252bc40c-kube-api-access-jrqxh\") pod \"06a91585-08ff-4e90-929a-691f252bc40c\" (UID: \"06a91585-08ff-4e90-929a-691f252bc40c\") " Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.969986 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06a91585-08ff-4e90-929a-691f252bc40c-serving-cert\") pod \"06a91585-08ff-4e90-929a-691f252bc40c\" (UID: 
\"06a91585-08ff-4e90-929a-691f252bc40c\") " Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.971724 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06a91585-08ff-4e90-929a-691f252bc40c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "06a91585-08ff-4e90-929a-691f252bc40c" (UID: "06a91585-08ff-4e90-929a-691f252bc40c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.971748 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06a91585-08ff-4e90-929a-691f252bc40c-config" (OuterVolumeSpecName: "config") pod "06a91585-08ff-4e90-929a-691f252bc40c" (UID: "06a91585-08ff-4e90-929a-691f252bc40c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.972446 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06a91585-08ff-4e90-929a-691f252bc40c-client-ca" (OuterVolumeSpecName: "client-ca") pod "06a91585-08ff-4e90-929a-691f252bc40c" (UID: "06a91585-08ff-4e90-929a-691f252bc40c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.973710 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a91585-08ff-4e90-929a-691f252bc40c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "06a91585-08ff-4e90-929a-691f252bc40c" (UID: "06a91585-08ff-4e90-929a-691f252bc40c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:49:21 crc kubenswrapper[4981]: I0227 18:49:21.979255 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06a91585-08ff-4e90-929a-691f252bc40c-kube-api-access-jrqxh" (OuterVolumeSpecName: "kube-api-access-jrqxh") pod "06a91585-08ff-4e90-929a-691f252bc40c" (UID: "06a91585-08ff-4e90-929a-691f252bc40c"). InnerVolumeSpecName "kube-api-access-jrqxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.071560 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrqxh\" (UniqueName: \"kubernetes.io/projected/06a91585-08ff-4e90-929a-691f252bc40c-kube-api-access-jrqxh\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.071600 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06a91585-08ff-4e90-929a-691f252bc40c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.071613 4981 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06a91585-08ff-4e90-929a-691f252bc40c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.071627 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06a91585-08ff-4e90-929a-691f252bc40c-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.071639 4981 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06a91585-08ff-4e90-929a-691f252bc40c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.198601 4981 patch_prober.go:28] interesting pod/route-controller-manager-7967d5845c-h45xc 
container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.198678 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc" podUID="24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.297516 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6w9qb" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.297610 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6w9qb" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.330727 4981 generic.go:334] "Generic (PLEG): container finished" podID="fbc8a428-3dab-402e-a105-0576aa196dcc" containerID="7f021a78e2685d6277851882e94f82c756da89b3f57b57016dcdbaca2ac2e671" exitCode=0 Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.330803 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmtpf" event={"ID":"fbc8a428-3dab-402e-a105-0576aa196dcc","Type":"ContainerDied","Data":"7f021a78e2685d6277851882e94f82c756da89b3f57b57016dcdbaca2ac2e671"} Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.333019 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.333118 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" event={"ID":"06a91585-08ff-4e90-929a-691f252bc40c","Type":"ContainerDied","Data":"151595ade6ae026f1d489b71c3176261490c127050989ddba0077b699a83f986"} Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.333222 4981 scope.go:117] "RemoveContainer" containerID="95d24b3a9647484f6cb3343829ceb12d5127042eb64cd193cc73b74757e2b973" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.333022 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.375844 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8"] Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.383621 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-55fdcddcdb-4vvj8"] Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.397235 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc"] Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.405974 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7967d5845c-h45xc"] Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.872046 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67"] Feb 27 18:49:22 crc kubenswrapper[4981]: E0227 18:49:22.872542 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bff5a34-e6d7-482d-bed3-dfe5269b225a" 
containerName="oc" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.872557 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bff5a34-e6d7-482d-bed3-dfe5269b225a" containerName="oc" Feb 27 18:49:22 crc kubenswrapper[4981]: E0227 18:49:22.872573 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4" containerName="route-controller-manager" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.872579 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4" containerName="route-controller-manager" Feb 27 18:49:22 crc kubenswrapper[4981]: E0227 18:49:22.872589 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06a91585-08ff-4e90-929a-691f252bc40c" containerName="controller-manager" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.872597 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="06a91585-08ff-4e90-929a-691f252bc40c" containerName="controller-manager" Feb 27 18:49:22 crc kubenswrapper[4981]: E0227 18:49:22.872609 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4319326-d440-4e65-bb82-6da5c3031f30" containerName="pruner" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.872615 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4319326-d440-4e65-bb82-6da5c3031f30" containerName="pruner" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.872710 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4" containerName="route-controller-manager" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.872722 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bff5a34-e6d7-482d-bed3-dfe5269b225a" containerName="oc" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.872731 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4319326-d440-4e65-bb82-6da5c3031f30" containerName="pruner" Feb 27 
18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.872741 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="06a91585-08ff-4e90-929a-691f252bc40c" containerName="controller-manager" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.873078 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.876547 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.877350 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.877815 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.877962 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.878099 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.878238 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.950229 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67"] Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.984159 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15-config\") pod \"route-controller-manager-59f4fd5997-6wh67\" (UID: \"a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15\") " pod="openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.984254 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15-client-ca\") pod \"route-controller-manager-59f4fd5997-6wh67\" (UID: \"a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15\") " pod="openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.984295 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15-serving-cert\") pod \"route-controller-manager-59f4fd5997-6wh67\" (UID: \"a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15\") " pod="openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67" Feb 27 18:49:22 crc kubenswrapper[4981]: I0227 18:49:22.984323 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcjps\" (UniqueName: \"kubernetes.io/projected/a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15-kube-api-access-kcjps\") pod \"route-controller-manager-59f4fd5997-6wh67\" (UID: \"a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15\") " pod="openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.085864 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15-config\") pod \"route-controller-manager-59f4fd5997-6wh67\" (UID: \"a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15\") " 
pod="openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.085942 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15-client-ca\") pod \"route-controller-manager-59f4fd5997-6wh67\" (UID: \"a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15\") " pod="openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.085981 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15-serving-cert\") pod \"route-controller-manager-59f4fd5997-6wh67\" (UID: \"a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15\") " pod="openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.086012 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcjps\" (UniqueName: \"kubernetes.io/projected/a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15-kube-api-access-kcjps\") pod \"route-controller-manager-59f4fd5997-6wh67\" (UID: \"a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15\") " pod="openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.087896 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15-config\") pod \"route-controller-manager-59f4fd5997-6wh67\" (UID: \"a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15\") " pod="openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.088827 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15-client-ca\") pod \"route-controller-manager-59f4fd5997-6wh67\" (UID: \"a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15\") " pod="openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.097622 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15-serving-cert\") pod \"route-controller-manager-59f4fd5997-6wh67\" (UID: \"a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15\") " pod="openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.118903 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcjps\" (UniqueName: \"kubernetes.io/projected/a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15-kube-api-access-kcjps\") pod \"route-controller-manager-59f4fd5997-6wh67\" (UID: \"a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15\") " pod="openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.196159 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.343430 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvh5h" event={"ID":"5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3","Type":"ContainerStarted","Data":"7c93d4dfa7fd03d89d70b79495c91604d04f8f75a09d680bcdcfd30250b36f68"} Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.345859 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rhmlr" event={"ID":"31a25fb4-5131-45cb-a965-eebe7bcf6a5d","Type":"ContainerStarted","Data":"4424a1278a85eb0c50c90f476eb0e65ac5855eadcc51c5c93d4f085a031e396d"} Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.373213 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-6w9qb" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" containerName="registry-server" probeResult="failure" output=< Feb 27 18:49:23 crc kubenswrapper[4981]: timeout: failed to connect service ":50051" within 1s Feb 27 18:49:23 crc kubenswrapper[4981]: > Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.383283 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rvh5h" podStartSLOduration=5.085657464 podStartE2EDuration="1m24.383258737s" podCreationTimestamp="2026-02-27 18:47:59 +0000 UTC" firstStartedPulling="2026-02-27 18:48:02.437261085 +0000 UTC m=+181.916042235" lastFinishedPulling="2026-02-27 18:49:21.734862338 +0000 UTC m=+261.213643508" observedRunningTime="2026-02-27 18:49:23.362171147 +0000 UTC m=+262.840952377" watchObservedRunningTime="2026-02-27 18:49:23.383258737 +0000 UTC m=+262.862039917" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.386032 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rhmlr" 
podStartSLOduration=4.06568022 podStartE2EDuration="1m23.386024493s" podCreationTimestamp="2026-02-27 18:48:00 +0000 UTC" firstStartedPulling="2026-02-27 18:48:02.424678214 +0000 UTC m=+181.903459374" lastFinishedPulling="2026-02-27 18:49:21.745022487 +0000 UTC m=+261.223803647" observedRunningTime="2026-02-27 18:49:23.382144764 +0000 UTC m=+262.860925934" watchObservedRunningTime="2026-02-27 18:49:23.386024493 +0000 UTC m=+262.864805663" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.641666 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06a91585-08ff-4e90-929a-691f252bc40c" path="/var/lib/kubelet/pods/06a91585-08ff-4e90-929a-691f252bc40c/volumes" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.642651 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4" path="/var/lib/kubelet/pods/24b1e2c0-4ae8-42ca-a065-98c8ccdfa0a4/volumes" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.708996 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zrwz4" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.709042 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zrwz4" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.873166 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-947f9d7f9-cqzz8"] Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.873872 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.876380 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.878286 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.878803 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.879392 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.879426 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.879483 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.887129 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.894150 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-947f9d7f9-cqzz8"] Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.997855 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5c7be0b-ae01-4409-979a-0c6df564767a-proxy-ca-bundles\") pod \"controller-manager-947f9d7f9-cqzz8\" (UID: \"b5c7be0b-ae01-4409-979a-0c6df564767a\") " 
pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.997920 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5c7be0b-ae01-4409-979a-0c6df564767a-serving-cert\") pod \"controller-manager-947f9d7f9-cqzz8\" (UID: \"b5c7be0b-ae01-4409-979a-0c6df564767a\") " pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.998293 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv5q6\" (UniqueName: \"kubernetes.io/projected/b5c7be0b-ae01-4409-979a-0c6df564767a-kube-api-access-rv5q6\") pod \"controller-manager-947f9d7f9-cqzz8\" (UID: \"b5c7be0b-ae01-4409-979a-0c6df564767a\") " pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.998408 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5c7be0b-ae01-4409-979a-0c6df564767a-config\") pod \"controller-manager-947f9d7f9-cqzz8\" (UID: \"b5c7be0b-ae01-4409-979a-0c6df564767a\") " pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" Feb 27 18:49:23 crc kubenswrapper[4981]: I0227 18:49:23.998490 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5c7be0b-ae01-4409-979a-0c6df564767a-client-ca\") pod \"controller-manager-947f9d7f9-cqzz8\" (UID: \"b5c7be0b-ae01-4409-979a-0c6df564767a\") " pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" Feb 27 18:49:24 crc kubenswrapper[4981]: I0227 18:49:24.100431 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/b5c7be0b-ae01-4409-979a-0c6df564767a-client-ca\") pod \"controller-manager-947f9d7f9-cqzz8\" (UID: \"b5c7be0b-ae01-4409-979a-0c6df564767a\") " pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" Feb 27 18:49:24 crc kubenswrapper[4981]: I0227 18:49:24.100543 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5c7be0b-ae01-4409-979a-0c6df564767a-proxy-ca-bundles\") pod \"controller-manager-947f9d7f9-cqzz8\" (UID: \"b5c7be0b-ae01-4409-979a-0c6df564767a\") " pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" Feb 27 18:49:24 crc kubenswrapper[4981]: I0227 18:49:24.100588 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5c7be0b-ae01-4409-979a-0c6df564767a-serving-cert\") pod \"controller-manager-947f9d7f9-cqzz8\" (UID: \"b5c7be0b-ae01-4409-979a-0c6df564767a\") " pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" Feb 27 18:49:24 crc kubenswrapper[4981]: I0227 18:49:24.100701 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv5q6\" (UniqueName: \"kubernetes.io/projected/b5c7be0b-ae01-4409-979a-0c6df564767a-kube-api-access-rv5q6\") pod \"controller-manager-947f9d7f9-cqzz8\" (UID: \"b5c7be0b-ae01-4409-979a-0c6df564767a\") " pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" Feb 27 18:49:24 crc kubenswrapper[4981]: I0227 18:49:24.100750 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5c7be0b-ae01-4409-979a-0c6df564767a-config\") pod \"controller-manager-947f9d7f9-cqzz8\" (UID: \"b5c7be0b-ae01-4409-979a-0c6df564767a\") " pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" Feb 27 18:49:24 crc kubenswrapper[4981]: I0227 18:49:24.102263 4981 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5c7be0b-ae01-4409-979a-0c6df564767a-client-ca\") pod \"controller-manager-947f9d7f9-cqzz8\" (UID: \"b5c7be0b-ae01-4409-979a-0c6df564767a\") " pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" Feb 27 18:49:24 crc kubenswrapper[4981]: I0227 18:49:24.102535 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5c7be0b-ae01-4409-979a-0c6df564767a-proxy-ca-bundles\") pod \"controller-manager-947f9d7f9-cqzz8\" (UID: \"b5c7be0b-ae01-4409-979a-0c6df564767a\") " pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" Feb 27 18:49:24 crc kubenswrapper[4981]: I0227 18:49:24.103318 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5c7be0b-ae01-4409-979a-0c6df564767a-config\") pod \"controller-manager-947f9d7f9-cqzz8\" (UID: \"b5c7be0b-ae01-4409-979a-0c6df564767a\") " pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" Feb 27 18:49:24 crc kubenswrapper[4981]: I0227 18:49:24.104084 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5c7be0b-ae01-4409-979a-0c6df564767a-serving-cert\") pod \"controller-manager-947f9d7f9-cqzz8\" (UID: \"b5c7be0b-ae01-4409-979a-0c6df564767a\") " pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" Feb 27 18:49:24 crc kubenswrapper[4981]: I0227 18:49:24.133850 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv5q6\" (UniqueName: \"kubernetes.io/projected/b5c7be0b-ae01-4409-979a-0c6df564767a-kube-api-access-rv5q6\") pod \"controller-manager-947f9d7f9-cqzz8\" (UID: \"b5c7be0b-ae01-4409-979a-0c6df564767a\") " pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" Feb 27 18:49:24 crc 
kubenswrapper[4981]: I0227 18:49:24.189692 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" Feb 27 18:49:24 crc kubenswrapper[4981]: I0227 18:49:24.384968 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v4vk8"] Feb 27 18:49:24 crc kubenswrapper[4981]: I0227 18:49:24.739511 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-947f9d7f9-cqzz8"] Feb 27 18:49:24 crc kubenswrapper[4981]: W0227 18:49:24.743135 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5c7be0b_ae01_4409_979a_0c6df564767a.slice/crio-841d21e2e4c8851764d6df943d8720039e4301679b99e2f3cce2a12e5a81d326 WatchSource:0}: Error finding container 841d21e2e4c8851764d6df943d8720039e4301679b99e2f3cce2a12e5a81d326: Status 404 returned error can't find the container with id 841d21e2e4c8851764d6df943d8720039e4301679b99e2f3cce2a12e5a81d326 Feb 27 18:49:24 crc kubenswrapper[4981]: I0227 18:49:24.763576 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zrwz4" podUID="fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c" containerName="registry-server" probeResult="failure" output=< Feb 27 18:49:24 crc kubenswrapper[4981]: timeout: failed to connect service ":50051" within 1s Feb 27 18:49:24 crc kubenswrapper[4981]: > Feb 27 18:49:24 crc kubenswrapper[4981]: I0227 18:49:24.940809 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67"] Feb 27 18:49:25 crc kubenswrapper[4981]: I0227 18:49:25.356862 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" 
event={"ID":"b5c7be0b-ae01-4409-979a-0c6df564767a","Type":"ContainerStarted","Data":"2e82427061133d64a7f1c93fb5b536e5a8aa01e8882886d59adca98427fa3012"} Feb 27 18:49:25 crc kubenswrapper[4981]: I0227 18:49:25.356917 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" event={"ID":"b5c7be0b-ae01-4409-979a-0c6df564767a","Type":"ContainerStarted","Data":"841d21e2e4c8851764d6df943d8720039e4301679b99e2f3cce2a12e5a81d326"} Feb 27 18:49:25 crc kubenswrapper[4981]: I0227 18:49:25.357307 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" Feb 27 18:49:25 crc kubenswrapper[4981]: I0227 18:49:25.359743 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6nhjk" event={"ID":"afecaba0-c366-4a2f-a944-1a282869a955","Type":"ContainerStarted","Data":"4a3bcf4a243f97dc697ec081553027686846d6aeee8302d0abf61274e95efeea"} Feb 27 18:49:25 crc kubenswrapper[4981]: I0227 18:49:25.361730 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmtpf" event={"ID":"fbc8a428-3dab-402e-a105-0576aa196dcc","Type":"ContainerStarted","Data":"513e8610a9893c11f22b17fcb728e24e054da4bdb201a345b051f58818455550"} Feb 27 18:49:25 crc kubenswrapper[4981]: I0227 18:49:25.362831 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67" event={"ID":"a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15","Type":"ContainerStarted","Data":"311a688b683d488621b2cb3b19759aafa4eb4f51dd6fd2b2d47c137942d21a55"} Feb 27 18:49:25 crc kubenswrapper[4981]: I0227 18:49:25.362857 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67" 
event={"ID":"a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15","Type":"ContainerStarted","Data":"9e07c414efa85cecce5896025271fee847a2e256c61f50cdbc57712862c7f21e"} Feb 27 18:49:25 crc kubenswrapper[4981]: I0227 18:49:25.363335 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67" Feb 27 18:49:25 crc kubenswrapper[4981]: I0227 18:49:25.363635 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" Feb 27 18:49:25 crc kubenswrapper[4981]: I0227 18:49:25.384914 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" podStartSLOduration=5.384895601 podStartE2EDuration="5.384895601s" podCreationTimestamp="2026-02-27 18:49:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:49:25.383703705 +0000 UTC m=+264.862484865" watchObservedRunningTime="2026-02-27 18:49:25.384895601 +0000 UTC m=+264.863676761" Feb 27 18:49:25 crc kubenswrapper[4981]: I0227 18:49:25.400626 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rmtpf" podStartSLOduration=3.223706833 podStartE2EDuration="1m23.400607967s" podCreationTimestamp="2026-02-27 18:48:02 +0000 UTC" firstStartedPulling="2026-02-27 18:48:04.48238899 +0000 UTC m=+183.961170150" lastFinishedPulling="2026-02-27 18:49:24.659290124 +0000 UTC m=+264.138071284" observedRunningTime="2026-02-27 18:49:25.400119072 +0000 UTC m=+264.878900232" watchObservedRunningTime="2026-02-27 18:49:25.400607967 +0000 UTC m=+264.879389127" Feb 27 18:49:25 crc kubenswrapper[4981]: I0227 18:49:25.437666 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6nhjk" 
podStartSLOduration=3.470997778 podStartE2EDuration="1m24.437650781s" podCreationTimestamp="2026-02-27 18:48:01 +0000 UTC" firstStartedPulling="2026-02-27 18:48:03.462810843 +0000 UTC m=+182.941592003" lastFinishedPulling="2026-02-27 18:49:24.429463836 +0000 UTC m=+263.908245006" observedRunningTime="2026-02-27 18:49:25.434721241 +0000 UTC m=+264.913502401" watchObservedRunningTime="2026-02-27 18:49:25.437650781 +0000 UTC m=+264.916431941" Feb 27 18:49:25 crc kubenswrapper[4981]: I0227 18:49:25.458148 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67" podStartSLOduration=5.458127163 podStartE2EDuration="5.458127163s" podCreationTimestamp="2026-02-27 18:49:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:49:25.45577042 +0000 UTC m=+264.934551580" watchObservedRunningTime="2026-02-27 18:49:25.458127163 +0000 UTC m=+264.936908333" Feb 27 18:49:25 crc kubenswrapper[4981]: I0227 18:49:25.498431 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67" Feb 27 18:49:26 crc kubenswrapper[4981]: I0227 18:49:26.824085 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.072120 4981 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.074091 4981 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.074279 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.074292 4981 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 18:49:30 crc kubenswrapper[4981]: E0227 18:49:30.074878 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.074916 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 27 18:49:30 crc kubenswrapper[4981]: E0227 18:49:30.074936 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.074938 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://a7b136fb1de92dc4033968909c08b413444fa2a097e3af36d13c919b46a64cbb" gracePeriod=15 Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.074998 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58" gracePeriod=15 Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.074950 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 27 18:49:30 crc kubenswrapper[4981]: E0227 18:49:30.075164 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.075178 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 18:49:30 crc kubenswrapper[4981]: E0227 18:49:30.075195 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.075206 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 18:49:30 crc kubenswrapper[4981]: E0227 18:49:30.075216 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.075225 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 27 18:49:30 crc kubenswrapper[4981]: E0227 18:49:30.075234 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.075243 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.075192 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03" gracePeriod=15 Feb 27 18:49:30 crc kubenswrapper[4981]: E0227 18:49:30.075262 4981 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.075270 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 27 18:49:30 crc kubenswrapper[4981]: E0227 18:49:30.075282 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.075272 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e" gracePeriod=15 Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.075291 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 18:49:30 crc kubenswrapper[4981]: E0227 18:49:30.075417 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.075433 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 27 18:49:30 crc kubenswrapper[4981]: E0227 18:49:30.075450 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.075462 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.075654 4981 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.075670 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.075694 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.075713 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.075733 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.075749 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.075765 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.076151 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.076172 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.079806 4981 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812" gracePeriod=15 Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.081284 4981 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.131678 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.192013 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.192201 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.192244 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: 
I0227 18:49:30.192312 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.192363 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.192399 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.192428 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.192474 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: 
I0227 18:49:30.285150 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rvh5h" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.285249 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rvh5h" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.293447 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.293544 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.293576 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.293615 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.293647 4981 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.293669 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.293687 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.293738 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.293839 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.293886 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.293908 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.293929 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.293948 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.293969 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.293988 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.294009 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.325078 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rvh5h" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.397078 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.397852 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.398296 4981 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e" exitCode=2 Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.426637 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.448544 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rvh5h" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.449640 4981 status_manager.go:851] "Failed to get status for pod" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" pod="openshift-marketplace/certified-operators-rvh5h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rvh5h\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:30 crc kubenswrapper[4981]: W0227 18:49:30.458999 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-7b86512e11993f674f58ecf870d32ea3ef82cd5cd1d8b2acc720c43bbd25309f WatchSource:0}: Error finding container 7b86512e11993f674f58ecf870d32ea3ef82cd5cd1d8b2acc720c43bbd25309f: Status 404 returned error can't find the container with id 7b86512e11993f674f58ecf870d32ea3ef82cd5cd1d8b2acc720c43bbd25309f Feb 27 18:49:30 crc kubenswrapper[4981]: E0227 18:49:30.463987 4981 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.38:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18982f0952f55ea3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:49:30.463313571 +0000 UTC m=+269.942094731,LastTimestamp:2026-02-27 18:49:30.463313571 +0000 UTC m=+269.942094731,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.776304 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rhmlr" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.776631 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rhmlr" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.821158 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rhmlr" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.821745 4981 status_manager.go:851] "Failed to get status for pod" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" pod="openshift-marketplace/community-operators-rhmlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rhmlr\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:30 crc kubenswrapper[4981]: I0227 18:49:30.822303 4981 status_manager.go:851] "Failed to get status for pod" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" pod="openshift-marketplace/certified-operators-rvh5h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rvh5h\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:31 crc kubenswrapper[4981]: E0227 18:49:31.222828 4981 event.go:368] "Unable to write event (may retry after sleeping)" 
err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.38:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18982f0952f55ea3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:49:30.463313571 +0000 UTC m=+269.942094731,LastTimestamp:2026-02-27 18:49:30.463313571 +0000 UTC m=+269.942094731,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.408084 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.410009 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.410886 4981 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a7b136fb1de92dc4033968909c08b413444fa2a097e3af36d13c919b46a64cbb" exitCode=0 Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.410935 4981 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03" exitCode=0 Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.410951 4981 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58" exitCode=0 Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.411038 4981 scope.go:117] "RemoveContainer" containerID="8b9c3bbf1d5308d12b2710ba1ca7e17f1af553c7f9d8211d552fc38cb47f319d" Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.413044 4981 generic.go:334] "Generic (PLEG): container finished" podID="3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" containerID="69939e1a4ee65d89b7c09ee744f1eeb811c41facc80450d32e38443af4845cfb" exitCode=0 Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.413125 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70","Type":"ContainerDied","Data":"69939e1a4ee65d89b7c09ee744f1eeb811c41facc80450d32e38443af4845cfb"} Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.413777 4981 status_manager.go:851] "Failed to get status for pod" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" pod="openshift-marketplace/community-operators-rhmlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rhmlr\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.414335 4981 status_manager.go:851] "Failed to get status for pod" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" pod="openshift-marketplace/certified-operators-rvh5h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rvh5h\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.414813 4981 status_manager.go:851] "Failed to get status 
for pod" podUID="3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.420277 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d0dddaf001fea64aa6323b06d57818028076019f6eee0b79662b815ed4d53ac5"} Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.420337 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7b86512e11993f674f58ecf870d32ea3ef82cd5cd1d8b2acc720c43bbd25309f"} Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.421644 4981 status_manager.go:851] "Failed to get status for pod" podUID="3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.422030 4981 status_manager.go:851] "Failed to get status for pod" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" pod="openshift-marketplace/community-operators-rhmlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rhmlr\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.422482 4981 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.422811 4981 status_manager.go:851] "Failed to get status for pod" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" pod="openshift-marketplace/certified-operators-rvh5h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rvh5h\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.486719 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rhmlr" Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.487412 4981 status_manager.go:851] "Failed to get status for pod" podUID="3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.487971 4981 status_manager.go:851] "Failed to get status for pod" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" pod="openshift-marketplace/community-operators-rhmlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rhmlr\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.488916 4981 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:31 crc 
kubenswrapper[4981]: I0227 18:49:31.489851 4981 status_manager.go:851] "Failed to get status for pod" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" pod="openshift-marketplace/certified-operators-rvh5h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rvh5h\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.634728 4981 status_manager.go:851] "Failed to get status for pod" podUID="3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.635605 4981 status_manager.go:851] "Failed to get status for pod" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" pod="openshift-marketplace/community-operators-rhmlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rhmlr\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.636130 4981 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.636558 4981 status_manager.go:851] "Failed to get status for pod" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" pod="openshift-marketplace/certified-operators-rvh5h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rvh5h\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:31 crc 
kubenswrapper[4981]: I0227 18:49:31.861578 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6nhjk" Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.861734 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6nhjk" Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.923655 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6nhjk" Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.924402 4981 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.925132 4981 status_manager.go:851] "Failed to get status for pod" podUID="afecaba0-c366-4a2f-a944-1a282869a955" pod="openshift-marketplace/redhat-marketplace-6nhjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6nhjk\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.925747 4981 status_manager.go:851] "Failed to get status for pod" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" pod="openshift-marketplace/certified-operators-rvh5h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rvh5h\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.926116 4981 status_manager.go:851] "Failed to get status for pod" podUID="3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:31 crc kubenswrapper[4981]: I0227 18:49:31.926562 4981 status_manager.go:851] "Failed to get status for pod" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" pod="openshift-marketplace/community-operators-rhmlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rhmlr\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.359767 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6w9qb" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.360618 4981 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.361277 4981 status_manager.go:851] "Failed to get status for pod" podUID="afecaba0-c366-4a2f-a944-1a282869a955" pod="openshift-marketplace/redhat-marketplace-6nhjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6nhjk\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.361782 4981 status_manager.go:851] "Failed to get status for pod" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" pod="openshift-marketplace/certified-operators-rvh5h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rvh5h\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: 
I0227 18:49:32.362138 4981 status_manager.go:851] "Failed to get status for pod" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" pod="openshift-marketplace/redhat-marketplace-6w9qb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6w9qb\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.362468 4981 status_manager.go:851] "Failed to get status for pod" podUID="3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.362791 4981 status_manager.go:851] "Failed to get status for pod" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" pod="openshift-marketplace/community-operators-rhmlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rhmlr\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: E0227 18:49:32.458713 4981 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: E0227 18:49:32.463475 4981 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: E0227 18:49:32.464041 4981 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: 
connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: E0227 18:49:32.470653 4981 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: E0227 18:49:32.471086 4981 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.471117 4981 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 27 18:49:32 crc kubenswrapper[4981]: E0227 18:49:32.471341 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="200ms" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.474535 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6w9qb" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.475414 4981 status_manager.go:851] "Failed to get status for pod" podUID="afecaba0-c366-4a2f-a944-1a282869a955" pod="openshift-marketplace/redhat-marketplace-6nhjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6nhjk\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.478818 4981 status_manager.go:851] "Failed to get status for pod" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" pod="openshift-marketplace/certified-operators-rvh5h" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rvh5h\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.479250 4981 status_manager.go:851] "Failed to get status for pod" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" pod="openshift-marketplace/redhat-marketplace-6w9qb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6w9qb\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.480228 4981 status_manager.go:851] "Failed to get status for pod" podUID="3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.480753 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.480899 4981 status_manager.go:851] "Failed to get status for pod" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" pod="openshift-marketplace/community-operators-rhmlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rhmlr\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.488866 4981 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 
27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.573016 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6nhjk" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.573566 4981 status_manager.go:851] "Failed to get status for pod" podUID="3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.574019 4981 status_manager.go:851] "Failed to get status for pod" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" pod="openshift-marketplace/community-operators-rhmlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rhmlr\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.574393 4981 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.574760 4981 status_manager.go:851] "Failed to get status for pod" podUID="afecaba0-c366-4a2f-a944-1a282869a955" pod="openshift-marketplace/redhat-marketplace-6nhjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6nhjk\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.575038 4981 status_manager.go:851] "Failed to get status for pod" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" pod="openshift-marketplace/certified-operators-rvh5h" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rvh5h\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.575366 4981 status_manager.go:851] "Failed to get status for pod" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" pod="openshift-marketplace/redhat-marketplace-6w9qb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6w9qb\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: E0227 18:49:32.672972 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="400ms" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.788297 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.788942 4981 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.789553 4981 status_manager.go:851] "Failed to get status for pod" podUID="afecaba0-c366-4a2f-a944-1a282869a955" pod="openshift-marketplace/redhat-marketplace-6nhjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6nhjk\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.789955 4981 status_manager.go:851] "Failed to get status for pod" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" pod="openshift-marketplace/certified-operators-rvh5h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rvh5h\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.790314 4981 status_manager.go:851] "Failed to get status for pod" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" pod="openshift-marketplace/redhat-marketplace-6w9qb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6w9qb\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.790605 4981 status_manager.go:851] "Failed to get status for pod" podUID="3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.791104 4981 status_manager.go:851] "Failed to get status for pod" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" pod="openshift-marketplace/community-operators-rhmlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rhmlr\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.933740 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70-var-lock\") pod \"3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70\" (UID: \"3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70\") " Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.933827 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70-kubelet-dir\") pod \"3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70\" (UID: \"3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70\") " Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.933897 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70-kube-api-access\") pod \"3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70\" (UID: \"3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70\") " Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.934271 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70-var-lock" (OuterVolumeSpecName: "var-lock") pod "3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" (UID: "3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.934357 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" (UID: "3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:49:32 crc kubenswrapper[4981]: I0227 18:49:32.942866 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" (UID: "3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.035765 4981 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70-var-lock\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.035850 4981 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.035870 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:33 crc kubenswrapper[4981]: E0227 18:49:33.075480 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.102.83.38:6443: connect: connection refused" interval="800ms" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.248969 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.250398 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.251045 4981 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.251553 4981 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.252032 4981 status_manager.go:851] "Failed to get status for pod" podUID="afecaba0-c366-4a2f-a944-1a282869a955" pod="openshift-marketplace/redhat-marketplace-6nhjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6nhjk\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.252482 4981 status_manager.go:851] "Failed to get status for pod" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" pod="openshift-marketplace/certified-operators-rvh5h" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rvh5h\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.252923 4981 status_manager.go:851] "Failed to get status for pod" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" pod="openshift-marketplace/redhat-marketplace-6w9qb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6w9qb\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.253426 4981 status_manager.go:851] "Failed to get status for pod" podUID="3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.253880 4981 status_manager.go:851] "Failed to get status for pod" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" pod="openshift-marketplace/community-operators-rhmlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rhmlr\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.299932 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rmtpf" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.299990 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rmtpf" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.343595 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" 
(UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.343672 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.343711 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.343996 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.344279 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.344317 4981 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.344555 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.384920 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rmtpf" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.385532 4981 status_manager.go:851] "Failed to get status for pod" podUID="fbc8a428-3dab-402e-a105-0576aa196dcc" pod="openshift-marketplace/redhat-operators-rmtpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rmtpf\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.385961 4981 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.386415 4981 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.386948 4981 status_manager.go:851] "Failed to get status for pod" podUID="afecaba0-c366-4a2f-a944-1a282869a955" pod="openshift-marketplace/redhat-marketplace-6nhjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6nhjk\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.387644 4981 status_manager.go:851] "Failed to get status for pod" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" pod="openshift-marketplace/certified-operators-rvh5h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rvh5h\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.387971 4981 status_manager.go:851] "Failed to get status for pod" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" pod="openshift-marketplace/redhat-marketplace-6w9qb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6w9qb\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.388467 4981 status_manager.go:851] "Failed to get status for pod" podUID="3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.388960 4981 status_manager.go:851] "Failed to get status for pod" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" pod="openshift-marketplace/community-operators-rhmlr" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rhmlr\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.445381 4981 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.445424 4981 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.511796 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.513879 4981 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812" exitCode=0 Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.514140 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.514502 4981 scope.go:117] "RemoveContainer" containerID="a7b136fb1de92dc4033968909c08b413444fa2a097e3af36d13c919b46a64cbb" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.517562 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70","Type":"ContainerDied","Data":"7e24288e57581eefea091ed4b430f2442c6eb04c8270242bf79530748381d307"} Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.517628 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e24288e57581eefea091ed4b430f2442c6eb04c8270242bf79530748381d307" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.518194 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.540972 4981 scope.go:117] "RemoveContainer" containerID="fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.551327 4981 status_manager.go:851] "Failed to get status for pod" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" pod="openshift-marketplace/community-operators-rhmlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rhmlr\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.551605 4981 status_manager.go:851] "Failed to get status for pod" podUID="fbc8a428-3dab-402e-a105-0576aa196dcc" pod="openshift-marketplace/redhat-operators-rmtpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rmtpf\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 
18:49:33.552035 4981 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.552280 4981 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.552681 4981 status_manager.go:851] "Failed to get status for pod" podUID="afecaba0-c366-4a2f-a944-1a282869a955" pod="openshift-marketplace/redhat-marketplace-6nhjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6nhjk\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.553037 4981 status_manager.go:851] "Failed to get status for pod" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" pod="openshift-marketplace/certified-operators-rvh5h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rvh5h\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.553232 4981 status_manager.go:851] "Failed to get status for pod" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" pod="openshift-marketplace/redhat-marketplace-6w9qb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6w9qb\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 
18:49:33.553410 4981 status_manager.go:851] "Failed to get status for pod" podUID="3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.553597 4981 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.553766 4981 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.553930 4981 status_manager.go:851] "Failed to get status for pod" podUID="afecaba0-c366-4a2f-a944-1a282869a955" pod="openshift-marketplace/redhat-marketplace-6nhjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6nhjk\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.554122 4981 status_manager.go:851] "Failed to get status for pod" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" pod="openshift-marketplace/certified-operators-rvh5h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rvh5h\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.554298 4981 
status_manager.go:851] "Failed to get status for pod" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" pod="openshift-marketplace/redhat-marketplace-6w9qb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6w9qb\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.554465 4981 status_manager.go:851] "Failed to get status for pod" podUID="3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.554631 4981 status_manager.go:851] "Failed to get status for pod" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" pod="openshift-marketplace/community-operators-rhmlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rhmlr\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.554795 4981 status_manager.go:851] "Failed to get status for pod" podUID="fbc8a428-3dab-402e-a105-0576aa196dcc" pod="openshift-marketplace/redhat-operators-rmtpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rmtpf\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.562159 4981 scope.go:117] "RemoveContainer" containerID="f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.577651 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rmtpf" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.578003 4981 status_manager.go:851] "Failed to get status for pod" 
podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" pod="openshift-marketplace/redhat-marketplace-6w9qb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6w9qb\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.578335 4981 status_manager.go:851] "Failed to get status for pod" podUID="3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.578678 4981 status_manager.go:851] "Failed to get status for pod" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" pod="openshift-marketplace/community-operators-rhmlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rhmlr\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.578925 4981 status_manager.go:851] "Failed to get status for pod" podUID="fbc8a428-3dab-402e-a105-0576aa196dcc" pod="openshift-marketplace/redhat-operators-rmtpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rmtpf\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.579114 4981 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.579327 4981 status_manager.go:851] "Failed to get status for pod" 
podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.579668 4981 status_manager.go:851] "Failed to get status for pod" podUID="afecaba0-c366-4a2f-a944-1a282869a955" pod="openshift-marketplace/redhat-marketplace-6nhjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6nhjk\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.579894 4981 status_manager.go:851] "Failed to get status for pod" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" pod="openshift-marketplace/certified-operators-rvh5h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rvh5h\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.581062 4981 scope.go:117] "RemoveContainer" containerID="4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.605033 4981 scope.go:117] "RemoveContainer" containerID="9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.641689 4981 scope.go:117] "RemoveContainer" containerID="10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.648611 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.671374 4981 scope.go:117] "RemoveContainer" 
containerID="a7b136fb1de92dc4033968909c08b413444fa2a097e3af36d13c919b46a64cbb" Feb 27 18:49:33 crc kubenswrapper[4981]: E0227 18:49:33.671753 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7b136fb1de92dc4033968909c08b413444fa2a097e3af36d13c919b46a64cbb\": container with ID starting with a7b136fb1de92dc4033968909c08b413444fa2a097e3af36d13c919b46a64cbb not found: ID does not exist" containerID="a7b136fb1de92dc4033968909c08b413444fa2a097e3af36d13c919b46a64cbb" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.671804 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7b136fb1de92dc4033968909c08b413444fa2a097e3af36d13c919b46a64cbb"} err="failed to get container status \"a7b136fb1de92dc4033968909c08b413444fa2a097e3af36d13c919b46a64cbb\": rpc error: code = NotFound desc = could not find container \"a7b136fb1de92dc4033968909c08b413444fa2a097e3af36d13c919b46a64cbb\": container with ID starting with a7b136fb1de92dc4033968909c08b413444fa2a097e3af36d13c919b46a64cbb not found: ID does not exist" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.671838 4981 scope.go:117] "RemoveContainer" containerID="fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03" Feb 27 18:49:33 crc kubenswrapper[4981]: E0227 18:49:33.672457 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\": container with ID starting with fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03 not found: ID does not exist" containerID="fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.672527 4981 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03"} err="failed to get container status \"fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\": rpc error: code = NotFound desc = could not find container \"fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03\": container with ID starting with fd87c0fe0f216e667bbf3d223cfac16476c48990eecc9c73bc14f4a92d0eba03 not found: ID does not exist" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.672570 4981 scope.go:117] "RemoveContainer" containerID="f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58" Feb 27 18:49:33 crc kubenswrapper[4981]: E0227 18:49:33.672880 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\": container with ID starting with f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58 not found: ID does not exist" containerID="f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.672917 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58"} err="failed to get container status \"f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\": rpc error: code = NotFound desc = could not find container \"f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58\": container with ID starting with f7c6aaa53aa52cf3cf6f7ad3e129a7ea2ff8c06b741007480dc9e1125dbabd58 not found: ID does not exist" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.672943 4981 scope.go:117] "RemoveContainer" containerID="4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e" Feb 27 18:49:33 crc kubenswrapper[4981]: E0227 18:49:33.673258 4981 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\": container with ID starting with 4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e not found: ID does not exist" containerID="4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.673280 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e"} err="failed to get container status \"4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\": rpc error: code = NotFound desc = could not find container \"4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e\": container with ID starting with 4722336b6f290542619d0f09547c001a34d3cf1c73995c6d1aeefc37f3a81e0e not found: ID does not exist" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.673295 4981 scope.go:117] "RemoveContainer" containerID="9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812" Feb 27 18:49:33 crc kubenswrapper[4981]: E0227 18:49:33.673748 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\": container with ID starting with 9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812 not found: ID does not exist" containerID="9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.673774 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812"} err="failed to get container status \"9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\": rpc error: code = NotFound desc = could not find container 
\"9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812\": container with ID starting with 9ceb528a1b0dccd42398b0c4aa5db91fee34ebbd683d171147d998a1f326c812 not found: ID does not exist" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.673789 4981 scope.go:117] "RemoveContainer" containerID="10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3" Feb 27 18:49:33 crc kubenswrapper[4981]: E0227 18:49:33.674151 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\": container with ID starting with 10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3 not found: ID does not exist" containerID="10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.674193 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3"} err="failed to get container status \"10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\": rpc error: code = NotFound desc = could not find container \"10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3\": container with ID starting with 10db7672377f8e43dce032539e0c5d719cb06b0e24eb68b2ed19585b2352aba3 not found: ID does not exist" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.773257 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zrwz4" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.774247 4981 status_manager.go:851] "Failed to get status for pod" podUID="3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection 
refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.775651 4981 status_manager.go:851] "Failed to get status for pod" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" pod="openshift-marketplace/community-operators-rhmlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rhmlr\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.776015 4981 status_manager.go:851] "Failed to get status for pod" podUID="fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c" pod="openshift-marketplace/redhat-operators-zrwz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zrwz4\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.776304 4981 status_manager.go:851] "Failed to get status for pod" podUID="fbc8a428-3dab-402e-a105-0576aa196dcc" pod="openshift-marketplace/redhat-operators-rmtpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rmtpf\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.776537 4981 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.776739 4981 status_manager.go:851] "Failed to get status for pod" podUID="afecaba0-c366-4a2f-a944-1a282869a955" pod="openshift-marketplace/redhat-marketplace-6nhjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6nhjk\": dial tcp 38.102.83.38:6443: connect: connection refused" 
Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.776935 4981 status_manager.go:851] "Failed to get status for pod" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" pod="openshift-marketplace/certified-operators-rvh5h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rvh5h\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.777340 4981 status_manager.go:851] "Failed to get status for pod" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" pod="openshift-marketplace/redhat-marketplace-6w9qb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6w9qb\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.843524 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zrwz4" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.844336 4981 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.844601 4981 status_manager.go:851] "Failed to get status for pod" podUID="afecaba0-c366-4a2f-a944-1a282869a955" pod="openshift-marketplace/redhat-marketplace-6nhjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6nhjk\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.845207 4981 status_manager.go:851] "Failed to get status for pod" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" 
pod="openshift-marketplace/certified-operators-rvh5h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rvh5h\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.845775 4981 status_manager.go:851] "Failed to get status for pod" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" pod="openshift-marketplace/redhat-marketplace-6w9qb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6w9qb\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.846212 4981 status_manager.go:851] "Failed to get status for pod" podUID="3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.846500 4981 status_manager.go:851] "Failed to get status for pod" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" pod="openshift-marketplace/community-operators-rhmlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rhmlr\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.846914 4981 status_manager.go:851] "Failed to get status for pod" podUID="fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c" pod="openshift-marketplace/redhat-operators-zrwz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zrwz4\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: I0227 18:49:33.847235 4981 status_manager.go:851] "Failed to get status for pod" podUID="fbc8a428-3dab-402e-a105-0576aa196dcc" 
pod="openshift-marketplace/redhat-operators-rmtpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rmtpf\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:33 crc kubenswrapper[4981]: E0227 18:49:33.876839 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="1.6s" Feb 27 18:49:35 crc kubenswrapper[4981]: E0227 18:49:35.477625 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="3.2s" Feb 27 18:49:38 crc kubenswrapper[4981]: E0227 18:49:38.679002 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.38:6443: connect: connection refused" interval="6.4s" Feb 27 18:49:41 crc kubenswrapper[4981]: E0227 18:49:41.224704 4981 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.38:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18982f0952f55ea3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 18:49:30.463313571 +0000 UTC m=+269.942094731,LastTimestamp:2026-02-27 18:49:30.463313571 +0000 UTC m=+269.942094731,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 18:49:41 crc kubenswrapper[4981]: I0227 18:49:41.632297 4981 status_manager.go:851] "Failed to get status for pod" podUID="fbc8a428-3dab-402e-a105-0576aa196dcc" pod="openshift-marketplace/redhat-operators-rmtpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rmtpf\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:41 crc kubenswrapper[4981]: I0227 18:49:41.632900 4981 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:41 crc kubenswrapper[4981]: I0227 18:49:41.633387 4981 status_manager.go:851] "Failed to get status for pod" podUID="afecaba0-c366-4a2f-a944-1a282869a955" pod="openshift-marketplace/redhat-marketplace-6nhjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6nhjk\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:41 crc kubenswrapper[4981]: I0227 18:49:41.633796 4981 status_manager.go:851] "Failed to get status for pod" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" pod="openshift-marketplace/certified-operators-rvh5h" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rvh5h\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:41 crc kubenswrapper[4981]: I0227 18:49:41.634308 4981 status_manager.go:851] "Failed to get status for pod" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" pod="openshift-marketplace/redhat-marketplace-6w9qb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6w9qb\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:41 crc kubenswrapper[4981]: I0227 18:49:41.635480 4981 status_manager.go:851] "Failed to get status for pod" podUID="3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:41 crc kubenswrapper[4981]: I0227 18:49:41.635977 4981 status_manager.go:851] "Failed to get status for pod" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" pod="openshift-marketplace/community-operators-rhmlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rhmlr\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:41 crc kubenswrapper[4981]: I0227 18:49:41.636427 4981 status_manager.go:851] "Failed to get status for pod" podUID="fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c" pod="openshift-marketplace/redhat-operators-zrwz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zrwz4\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:42 crc kubenswrapper[4981]: I0227 18:49:42.628413 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:49:42 crc kubenswrapper[4981]: I0227 18:49:42.629662 4981 status_manager.go:851] "Failed to get status for pod" podUID="3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:42 crc kubenswrapper[4981]: I0227 18:49:42.630413 4981 status_manager.go:851] "Failed to get status for pod" podUID="fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c" pod="openshift-marketplace/redhat-operators-zrwz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zrwz4\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:42 crc kubenswrapper[4981]: I0227 18:49:42.630912 4981 status_manager.go:851] "Failed to get status for pod" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" pod="openshift-marketplace/community-operators-rhmlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rhmlr\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:42 crc kubenswrapper[4981]: I0227 18:49:42.631461 4981 status_manager.go:851] "Failed to get status for pod" podUID="fbc8a428-3dab-402e-a105-0576aa196dcc" pod="openshift-marketplace/redhat-operators-rmtpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rmtpf\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:42 crc kubenswrapper[4981]: I0227 18:49:42.632030 4981 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": 
dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:42 crc kubenswrapper[4981]: I0227 18:49:42.632616 4981 status_manager.go:851] "Failed to get status for pod" podUID="afecaba0-c366-4a2f-a944-1a282869a955" pod="openshift-marketplace/redhat-marketplace-6nhjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6nhjk\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:42 crc kubenswrapper[4981]: I0227 18:49:42.633247 4981 status_manager.go:851] "Failed to get status for pod" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" pod="openshift-marketplace/certified-operators-rvh5h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rvh5h\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:42 crc kubenswrapper[4981]: I0227 18:49:42.633735 4981 status_manager.go:851] "Failed to get status for pod" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" pod="openshift-marketplace/redhat-marketplace-6w9qb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6w9qb\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:42 crc kubenswrapper[4981]: I0227 18:49:42.654872 4981 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5f0410cc-d6fa-4d09-b129-480c0d96f91a" Feb 27 18:49:42 crc kubenswrapper[4981]: I0227 18:49:42.654917 4981 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5f0410cc-d6fa-4d09-b129-480c0d96f91a" Feb 27 18:49:42 crc kubenswrapper[4981]: E0227 18:49:42.655486 4981 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:49:42 crc kubenswrapper[4981]: I0227 18:49:42.656221 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:49:42 crc kubenswrapper[4981]: W0227 18:49:42.687577 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-f0063b914a2b574874962df383bd9c42328c922aebe25efd4a4ce6063e13849a WatchSource:0}: Error finding container f0063b914a2b574874962df383bd9c42328c922aebe25efd4a4ce6063e13849a: Status 404 returned error can't find the container with id f0063b914a2b574874962df383bd9c42328c922aebe25efd4a4ce6063e13849a Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.605216 4981 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="922523d52e4f190ece69a632853fcc67b97d56318eb05deb02ee185da20a34ad" exitCode=0 Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.605324 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"922523d52e4f190ece69a632853fcc67b97d56318eb05deb02ee185da20a34ad"} Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.605596 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f0063b914a2b574874962df383bd9c42328c922aebe25efd4a4ce6063e13849a"} Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.605946 4981 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5f0410cc-d6fa-4d09-b129-480c0d96f91a" Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.605968 4981 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5f0410cc-d6fa-4d09-b129-480c0d96f91a" Feb 27 18:49:43 crc kubenswrapper[4981]: E0227 18:49:43.606396 4981 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.606489 4981 status_manager.go:851] "Failed to get status for pod" podUID="afecaba0-c366-4a2f-a944-1a282869a955" pod="openshift-marketplace/redhat-marketplace-6nhjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6nhjk\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.607043 4981 status_manager.go:851] "Failed to get status for pod" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" pod="openshift-marketplace/certified-operators-rvh5h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rvh5h\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.607465 4981 status_manager.go:851] "Failed to get status for pod" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" pod="openshift-marketplace/redhat-marketplace-6w9qb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6w9qb\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.607801 4981 status_manager.go:851] "Failed to get status for pod" podUID="3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: 
connection refused" Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.608031 4981 status_manager.go:851] "Failed to get status for pod" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" pod="openshift-marketplace/community-operators-rhmlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rhmlr\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.608368 4981 status_manager.go:851] "Failed to get status for pod" podUID="fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c" pod="openshift-marketplace/redhat-operators-zrwz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zrwz4\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.608605 4981 status_manager.go:851] "Failed to get status for pod" podUID="fbc8a428-3dab-402e-a105-0576aa196dcc" pod="openshift-marketplace/redhat-operators-rmtpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rmtpf\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.608875 4981 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.611165 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.613135 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.613223 4981 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="47da708cea4fc8dee9c3d6ac4bb7473cdc255bfed85666ddf72c1b49d93d94ec" exitCode=1 Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.613269 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"47da708cea4fc8dee9c3d6ac4bb7473cdc255bfed85666ddf72c1b49d93d94ec"} Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.613856 4981 scope.go:117] "RemoveContainer" containerID="47da708cea4fc8dee9c3d6ac4bb7473cdc255bfed85666ddf72c1b49d93d94ec" Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.614186 4981 status_manager.go:851] "Failed to get status for pod" podUID="afecaba0-c366-4a2f-a944-1a282869a955" pod="openshift-marketplace/redhat-marketplace-6nhjk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6nhjk\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.614602 4981 status_manager.go:851] "Failed to get status for pod" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" pod="openshift-marketplace/certified-operators-rvh5h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rvh5h\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.614812 4981 status_manager.go:851] "Failed to get status for pod" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" pod="openshift-marketplace/redhat-marketplace-6w9qb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6w9qb\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.615031 4981 status_manager.go:851] "Failed to get status for pod" podUID="3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.615515 4981 status_manager.go:851] "Failed to get status for pod" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" pod="openshift-marketplace/community-operators-rhmlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rhmlr\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.615968 4981 status_manager.go:851] "Failed to get status for pod" podUID="fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c" pod="openshift-marketplace/redhat-operators-zrwz4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zrwz4\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.616352 4981 status_manager.go:851] "Failed to get status for pod" podUID="fbc8a428-3dab-402e-a105-0576aa196dcc" pod="openshift-marketplace/redhat-operators-rmtpf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rmtpf\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.616788 4981 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:43 crc kubenswrapper[4981]: I0227 18:49:43.617157 4981 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.38:6443: connect: connection refused" Feb 27 18:49:44 crc kubenswrapper[4981]: I0227 18:49:44.636967 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b22209ab5bc5c102f853da85d544f0fb91e83741dd9e192fcc564124836acc8b"} Feb 27 18:49:44 crc kubenswrapper[4981]: I0227 18:49:44.637262 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7dca9a3f558815999c1c91a052b2231a737cb720025ac92aa844601307b3d43d"} Feb 27 18:49:44 crc kubenswrapper[4981]: I0227 18:49:44.637274 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"42cec03b8b931dc7da5c8858f24ed1d67622db025690ce247974f43fb246ed4c"} Feb 27 18:49:44 crc kubenswrapper[4981]: I0227 18:49:44.641987 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 27 18:49:44 crc kubenswrapper[4981]: I0227 18:49:44.642742 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 27 18:49:44 crc kubenswrapper[4981]: I0227 18:49:44.642828 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3ccd553e132d6e7797bc9b6ed083eac38f68efb04c8cccefbf90c8487d076728"} Feb 27 18:49:45 crc kubenswrapper[4981]: I0227 18:49:45.660998 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f4323608e1b063dc691cc7c518f9abee25926e8f238755027823153c3826edde"} Feb 27 18:49:45 crc kubenswrapper[4981]: I0227 18:49:45.662376 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b61d907875489a526dc61d76106cc0ce75d079c1f8e30ec1239e7e7f56ac16cb"} Feb 27 18:49:45 crc kubenswrapper[4981]: I0227 18:49:45.663012 4981 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5f0410cc-d6fa-4d09-b129-480c0d96f91a" Feb 27 18:49:45 crc kubenswrapper[4981]: I0227 18:49:45.663293 4981 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5f0410cc-d6fa-4d09-b129-480c0d96f91a" Feb 27 18:49:45 crc kubenswrapper[4981]: I0227 18:49:45.663840 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:49:47 crc kubenswrapper[4981]: I0227 18:49:47.656981 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:49:47 crc kubenswrapper[4981]: I0227 18:49:47.657373 4981 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:49:47 crc kubenswrapper[4981]: I0227 18:49:47.665839 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:49:48 crc kubenswrapper[4981]: I0227 18:49:48.137845 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 18:49:49 crc kubenswrapper[4981]: I0227 18:49:49.411858 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" podUID="86f8ab04-83b9-497b-a8b4-cde27e61d568" containerName="oauth-openshift" containerID="cri-o://a60cec4e64bc7442546f52ffe51a9d58559eda79dadaaf65106b275d95021031" gracePeriod=15 Feb 27 18:49:49 crc kubenswrapper[4981]: I0227 18:49:49.689315 4981 generic.go:334] "Generic (PLEG): container finished" podID="86f8ab04-83b9-497b-a8b4-cde27e61d568" containerID="a60cec4e64bc7442546f52ffe51a9d58559eda79dadaaf65106b275d95021031" exitCode=0 Feb 27 18:49:49 crc kubenswrapper[4981]: I0227 18:49:49.689377 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" event={"ID":"86f8ab04-83b9-497b-a8b4-cde27e61d568","Type":"ContainerDied","Data":"a60cec4e64bc7442546f52ffe51a9d58559eda79dadaaf65106b275d95021031"} Feb 27 18:49:49 crc kubenswrapper[4981]: I0227 18:49:49.891529 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 18:49:49 crc kubenswrapper[4981]: I0227 18:49:49.891755 4981 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" 
start-of-body= Feb 27 18:49:49 crc kubenswrapper[4981]: I0227 18:49:49.891811 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.249274 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.249607 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.250094 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.250893 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cda8d27799da2d9733389fe569"} pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.250961 4981 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" containerID="cri-o://cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cda8d27799da2d9733389fe569" gracePeriod=600 Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.385390 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.486248 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-service-ca\") pod \"86f8ab04-83b9-497b-a8b4-cde27e61d568\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.486612 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-serving-cert\") pod \"86f8ab04-83b9-497b-a8b4-cde27e61d568\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.486643 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-cliconfig\") pod \"86f8ab04-83b9-497b-a8b4-cde27e61d568\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.486675 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-router-certs\") pod \"86f8ab04-83b9-497b-a8b4-cde27e61d568\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " 
Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.486701 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-ocp-branding-template\") pod \"86f8ab04-83b9-497b-a8b4-cde27e61d568\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.486726 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/86f8ab04-83b9-497b-a8b4-cde27e61d568-audit-policies\") pod \"86f8ab04-83b9-497b-a8b4-cde27e61d568\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.486779 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-session\") pod \"86f8ab04-83b9-497b-a8b4-cde27e61d568\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.486813 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-user-template-error\") pod \"86f8ab04-83b9-497b-a8b4-cde27e61d568\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.486838 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/86f8ab04-83b9-497b-a8b4-cde27e61d568-audit-dir\") pod \"86f8ab04-83b9-497b-a8b4-cde27e61d568\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.486876 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-user-template-login\") pod \"86f8ab04-83b9-497b-a8b4-cde27e61d568\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.486900 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-user-template-provider-selection\") pod \"86f8ab04-83b9-497b-a8b4-cde27e61d568\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.486924 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-trusted-ca-bundle\") pod \"86f8ab04-83b9-497b-a8b4-cde27e61d568\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.486949 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf5mb\" (UniqueName: \"kubernetes.io/projected/86f8ab04-83b9-497b-a8b4-cde27e61d568-kube-api-access-nf5mb\") pod \"86f8ab04-83b9-497b-a8b4-cde27e61d568\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.486973 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-user-idp-0-file-data\") pod \"86f8ab04-83b9-497b-a8b4-cde27e61d568\" (UID: \"86f8ab04-83b9-497b-a8b4-cde27e61d568\") " Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.487516 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/86f8ab04-83b9-497b-a8b4-cde27e61d568-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "86f8ab04-83b9-497b-a8b4-cde27e61d568" (UID: "86f8ab04-83b9-497b-a8b4-cde27e61d568"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.487608 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "86f8ab04-83b9-497b-a8b4-cde27e61d568" (UID: "86f8ab04-83b9-497b-a8b4-cde27e61d568"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.488092 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86f8ab04-83b9-497b-a8b4-cde27e61d568-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "86f8ab04-83b9-497b-a8b4-cde27e61d568" (UID: "86f8ab04-83b9-497b-a8b4-cde27e61d568"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.488269 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "86f8ab04-83b9-497b-a8b4-cde27e61d568" (UID: "86f8ab04-83b9-497b-a8b4-cde27e61d568"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.489549 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "86f8ab04-83b9-497b-a8b4-cde27e61d568" (UID: "86f8ab04-83b9-497b-a8b4-cde27e61d568"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.493273 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "86f8ab04-83b9-497b-a8b4-cde27e61d568" (UID: "86f8ab04-83b9-497b-a8b4-cde27e61d568"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.493751 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "86f8ab04-83b9-497b-a8b4-cde27e61d568" (UID: "86f8ab04-83b9-497b-a8b4-cde27e61d568"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.494267 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "86f8ab04-83b9-497b-a8b4-cde27e61d568" (UID: "86f8ab04-83b9-497b-a8b4-cde27e61d568"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.496421 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "86f8ab04-83b9-497b-a8b4-cde27e61d568" (UID: "86f8ab04-83b9-497b-a8b4-cde27e61d568"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.498681 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "86f8ab04-83b9-497b-a8b4-cde27e61d568" (UID: "86f8ab04-83b9-497b-a8b4-cde27e61d568"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.499236 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "86f8ab04-83b9-497b-a8b4-cde27e61d568" (UID: "86f8ab04-83b9-497b-a8b4-cde27e61d568"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.500211 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "86f8ab04-83b9-497b-a8b4-cde27e61d568" (UID: "86f8ab04-83b9-497b-a8b4-cde27e61d568"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.506321 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86f8ab04-83b9-497b-a8b4-cde27e61d568-kube-api-access-nf5mb" (OuterVolumeSpecName: "kube-api-access-nf5mb") pod "86f8ab04-83b9-497b-a8b4-cde27e61d568" (UID: "86f8ab04-83b9-497b-a8b4-cde27e61d568"). InnerVolumeSpecName "kube-api-access-nf5mb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.506573 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "86f8ab04-83b9-497b-a8b4-cde27e61d568" (UID: "86f8ab04-83b9-497b-a8b4-cde27e61d568"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.588169 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.588226 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.588249 4981 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/86f8ab04-83b9-497b-a8b4-cde27e61d568-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.588273 4981 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.588294 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.588313 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.588332 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf5mb\" (UniqueName: \"kubernetes.io/projected/86f8ab04-83b9-497b-a8b4-cde27e61d568-kube-api-access-nf5mb\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.588350 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.588368 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.588386 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 
27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.588404 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.588421 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.588441 4981 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/86f8ab04-83b9-497b-a8b4-cde27e61d568-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.588459 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/86f8ab04-83b9-497b-a8b4-cde27e61d568-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.679260 4981 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.700348 4981 generic.go:334] "Generic (PLEG): container finished" podID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerID="cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cda8d27799da2d9733389fe569" exitCode=0 Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.700448 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerDied","Data":"cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cda8d27799da2d9733389fe569"} Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 
18:49:50.700507 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerStarted","Data":"1cf48ba9e38f3906931ef155e1b3ac43296d5152b357073cef60a56716eb0e06"} Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.707195 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" event={"ID":"86f8ab04-83b9-497b-a8b4-cde27e61d568","Type":"ContainerDied","Data":"5b8816fd66a52dddc23a29e2084d8bff64e7adb1fc379b4aa62f8a421597ca1f"} Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.707257 4981 scope.go:117] "RemoveContainer" containerID="a60cec4e64bc7442546f52ffe51a9d58559eda79dadaaf65106b275d95021031" Feb 27 18:49:50 crc kubenswrapper[4981]: I0227 18:49:50.707408 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-v4vk8" Feb 27 18:49:51 crc kubenswrapper[4981]: I0227 18:49:51.656464 4981 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2ae4358c-63d0-452a-91ac-ac74614e1e21" Feb 27 18:49:51 crc kubenswrapper[4981]: I0227 18:49:51.717498 4981 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5f0410cc-d6fa-4d09-b129-480c0d96f91a" Feb 27 18:49:51 crc kubenswrapper[4981]: I0227 18:49:51.717526 4981 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5f0410cc-d6fa-4d09-b129-480c0d96f91a" Feb 27 18:49:51 crc kubenswrapper[4981]: I0227 18:49:51.721871 4981 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" 
podUID="2ae4358c-63d0-452a-91ac-ac74614e1e21" Feb 27 18:49:51 crc kubenswrapper[4981]: I0227 18:49:51.722309 4981 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://42cec03b8b931dc7da5c8858f24ed1d67622db025690ce247974f43fb246ed4c" Feb 27 18:49:51 crc kubenswrapper[4981]: I0227 18:49:51.722326 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:49:52 crc kubenswrapper[4981]: I0227 18:49:52.734552 4981 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5f0410cc-d6fa-4d09-b129-480c0d96f91a" Feb 27 18:49:52 crc kubenswrapper[4981]: I0227 18:49:52.735095 4981 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5f0410cc-d6fa-4d09-b129-480c0d96f91a" Feb 27 18:49:52 crc kubenswrapper[4981]: I0227 18:49:52.748686 4981 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2ae4358c-63d0-452a-91ac-ac74614e1e21" Feb 27 18:49:59 crc kubenswrapper[4981]: I0227 18:49:59.897711 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 18:49:59 crc kubenswrapper[4981]: I0227 18:49:59.904983 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 18:50:00 crc kubenswrapper[4981]: I0227 18:50:00.621779 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 27 18:50:00 crc kubenswrapper[4981]: I0227 18:50:00.632046 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 27 
18:50:00 crc kubenswrapper[4981]: I0227 18:50:00.784911 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 27 18:50:00 crc kubenswrapper[4981]: I0227 18:50:00.956131 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 27 18:50:01 crc kubenswrapper[4981]: I0227 18:50:01.079383 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 27 18:50:01 crc kubenswrapper[4981]: I0227 18:50:01.131459 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 27 18:50:01 crc kubenswrapper[4981]: I0227 18:50:01.509872 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 27 18:50:01 crc kubenswrapper[4981]: I0227 18:50:01.775145 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 27 18:50:01 crc kubenswrapper[4981]: I0227 18:50:01.883902 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 27 18:50:01 crc kubenswrapper[4981]: I0227 18:50:01.967906 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 27 18:50:01 crc kubenswrapper[4981]: I0227 18:50:01.970309 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 27 18:50:02 crc kubenswrapper[4981]: I0227 18:50:02.046192 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 27 18:50:02 crc kubenswrapper[4981]: I0227 18:50:02.348443 4981 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 18:50:02 crc kubenswrapper[4981]: I0227 18:50:02.638314 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 27 18:50:02 crc kubenswrapper[4981]: I0227 18:50:02.700694 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 27 18:50:02 crc kubenswrapper[4981]: I0227 18:50:02.823800 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 18:50:02 crc kubenswrapper[4981]: I0227 18:50:02.945790 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 27 18:50:03 crc kubenswrapper[4981]: I0227 18:50:03.166679 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 27 18:50:03 crc kubenswrapper[4981]: I0227 18:50:03.330701 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 27 18:50:03 crc kubenswrapper[4981]: I0227 18:50:03.347245 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 27 18:50:03 crc kubenswrapper[4981]: I0227 18:50:03.351380 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 27 18:50:03 crc kubenswrapper[4981]: I0227 18:50:03.375810 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 27 18:50:03 crc kubenswrapper[4981]: I0227 18:50:03.422789 4981 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 27 18:50:03 crc kubenswrapper[4981]: I0227 18:50:03.511956 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 27 18:50:03 crc kubenswrapper[4981]: I0227 18:50:03.611619 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 27 18:50:03 crc kubenswrapper[4981]: I0227 18:50:03.649620 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 27 18:50:03 crc kubenswrapper[4981]: I0227 18:50:03.849024 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 27 18:50:03 crc kubenswrapper[4981]: I0227 18:50:03.873505 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 27 18:50:03 crc kubenswrapper[4981]: I0227 18:50:03.888856 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 27 18:50:03 crc kubenswrapper[4981]: I0227 18:50:03.993842 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 27 18:50:04 crc kubenswrapper[4981]: I0227 18:50:04.033519 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 27 18:50:04 crc kubenswrapper[4981]: I0227 18:50:04.150782 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 27 18:50:04 crc kubenswrapper[4981]: I0227 18:50:04.178024 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 27 18:50:04 crc kubenswrapper[4981]: I0227 18:50:04.183740 4981 reflector.go:368] Caches 
populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 27 18:50:04 crc kubenswrapper[4981]: I0227 18:50:04.188762 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=34.18874067 podStartE2EDuration="34.18874067s" podCreationTimestamp="2026-02-27 18:49:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:49:50.856695563 +0000 UTC m=+290.335476723" watchObservedRunningTime="2026-02-27 18:50:04.18874067 +0000 UTC m=+303.667521840" Feb 27 18:50:04 crc kubenswrapper[4981]: I0227 18:50:04.190680 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v4vk8","openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 18:50:04 crc kubenswrapper[4981]: I0227 18:50:04.190748 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 18:50:04 crc kubenswrapper[4981]: I0227 18:50:04.199668 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 18:50:04 crc kubenswrapper[4981]: I0227 18:50:04.214852 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.214834975 podStartE2EDuration="14.214834975s" podCreationTimestamp="2026-02-27 18:49:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:50:04.212233325 +0000 UTC m=+303.691014495" watchObservedRunningTime="2026-02-27 18:50:04.214834975 +0000 UTC m=+303.693616145" Feb 27 18:50:04 crc kubenswrapper[4981]: I0227 18:50:04.277854 4981 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 27 18:50:04 crc kubenswrapper[4981]: I0227 18:50:04.282703 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 27 18:50:04 crc kubenswrapper[4981]: I0227 18:50:04.316301 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 27 18:50:04 crc kubenswrapper[4981]: I0227 18:50:04.660431 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 27 18:50:04 crc kubenswrapper[4981]: I0227 18:50:04.702770 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 27 18:50:04 crc kubenswrapper[4981]: I0227 18:50:04.708798 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 27 18:50:04 crc kubenswrapper[4981]: I0227 18:50:04.965093 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.080600 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.100623 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.104977 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.132349 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 27 18:50:05 crc kubenswrapper[4981]: 
I0227 18:50:05.188866 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.195695 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.310449 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.428812 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.445219 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.506678 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.519574 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.584774 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.611993 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5b9d67559d-jwcvt"] Feb 27 18:50:05 crc kubenswrapper[4981]: E0227 18:50:05.612348 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86f8ab04-83b9-497b-a8b4-cde27e61d568" containerName="oauth-openshift" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.612392 4981 
state_mem.go:107] "Deleted CPUSet assignment" podUID="86f8ab04-83b9-497b-a8b4-cde27e61d568" containerName="oauth-openshift" Feb 27 18:50:05 crc kubenswrapper[4981]: E0227 18:50:05.612426 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" containerName="installer" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.612440 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" containerName="installer" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.612673 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="86f8ab04-83b9-497b-a8b4-cde27e61d568" containerName="oauth-openshift" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.612701 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e3c2f3d-bb29-457d-b1a9-8c0ce34e5d70" containerName="installer" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.613421 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.618864 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.622037 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.622380 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.622630 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.622766 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.623133 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.623495 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.623638 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.623774 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.623932 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 27 18:50:05 crc 
kubenswrapper[4981]: I0227 18:50:05.624027 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.625938 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.636387 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.640655 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.654041 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86f8ab04-83b9-497b-a8b4-cde27e61d568" path="/var/lib/kubelet/pods/86f8ab04-83b9-497b-a8b4-cde27e61d568/volumes" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.663145 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.713883 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.727845 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7jpk\" (UniqueName: \"kubernetes.io/projected/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-kube-api-access-z7jpk\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.727926 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.727995 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.728034 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.728140 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-system-service-ca\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.728219 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.728318 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-audit-policies\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.728388 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.728436 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-user-template-error\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.728486 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-system-router-certs\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: 
\"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.728528 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-system-session\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.728642 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-audit-dir\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.728704 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-user-template-login\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.728835 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.799415 4981 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.829807 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-audit-dir\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.829873 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-user-template-login\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.829921 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.829982 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7jpk\" (UniqueName: \"kubernetes.io/projected/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-kube-api-access-z7jpk\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.830022 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-audit-dir\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.830037 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.830195 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.830250 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.830300 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-system-service-ca\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " 
pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.830347 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.830453 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-audit-policies\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.830484 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.830540 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-user-template-error\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.830618 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-system-router-certs\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.830670 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-system-session\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.832228 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-audit-policies\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.832932 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.837526 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-system-service-ca\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " 
pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.838798 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.840029 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-system-router-certs\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.841003 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-user-template-login\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.841424 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-system-session\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.841723 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.843734 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.844573 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.844758 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-user-template-error\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.852320 4981 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.853827 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.854899 4981 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.864026 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7jpk\" (UniqueName: \"kubernetes.io/projected/33e9e9a0-6a17-4374-b89e-d00c9d1d794c-kube-api-access-z7jpk\") pod \"oauth-openshift-5b9d67559d-jwcvt\" (UID: \"33e9e9a0-6a17-4374-b89e-d00c9d1d794c\") " pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.954471 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:05 crc kubenswrapper[4981]: I0227 18:50:05.957647 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 27 18:50:06 crc kubenswrapper[4981]: I0227 18:50:06.048822 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 27 18:50:06 crc kubenswrapper[4981]: I0227 18:50:06.114587 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 27 18:50:06 crc kubenswrapper[4981]: I0227 18:50:06.123643 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 27 18:50:06 crc kubenswrapper[4981]: I0227 18:50:06.189103 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 27 18:50:06 crc kubenswrapper[4981]: I0227 18:50:06.288653 4981 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 27 18:50:06 crc kubenswrapper[4981]: I0227 18:50:06.472005 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 27 18:50:06 crc kubenswrapper[4981]: I0227 18:50:06.516751 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 27 18:50:06 crc kubenswrapper[4981]: I0227 18:50:06.635446 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 27 18:50:06 crc kubenswrapper[4981]: I0227 18:50:06.667439 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 27 18:50:06 crc kubenswrapper[4981]: I0227 18:50:06.671012 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 27 18:50:06 crc kubenswrapper[4981]: I0227 18:50:06.748919 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 27 18:50:06 crc kubenswrapper[4981]: I0227 18:50:06.832860 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 27 18:50:06 crc kubenswrapper[4981]: I0227 18:50:06.844282 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 27 18:50:06 crc kubenswrapper[4981]: I0227 18:50:06.976298 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 27 18:50:07 crc kubenswrapper[4981]: I0227 18:50:07.071730 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 18:50:07 crc kubenswrapper[4981]: 
I0227 18:50:07.129847 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 27 18:50:07 crc kubenswrapper[4981]: I0227 18:50:07.239314 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 27 18:50:07 crc kubenswrapper[4981]: I0227 18:50:07.276630 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 27 18:50:07 crc kubenswrapper[4981]: I0227 18:50:07.279574 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 27 18:50:07 crc kubenswrapper[4981]: I0227 18:50:07.412453 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 27 18:50:07 crc kubenswrapper[4981]: I0227 18:50:07.665128 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 27 18:50:07 crc kubenswrapper[4981]: I0227 18:50:07.736935 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 27 18:50:07 crc kubenswrapper[4981]: I0227 18:50:07.770421 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 27 18:50:07 crc kubenswrapper[4981]: I0227 18:50:07.874910 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 27 18:50:07 crc kubenswrapper[4981]: I0227 18:50:07.929894 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.049405 4981 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.125353 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.134455 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.138880 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.147805 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.271223 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.285537 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.296302 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.321826 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.336483 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.339925 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.356431 4981 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.359022 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.394745 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.428887 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.441508 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.448025 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.460647 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.566932 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.601872 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.648385 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.652976 4981 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.675716 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536970-cm5kp"] Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.676400 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536970-cm5kp" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.678867 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.682742 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.682757 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.693671 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.731471 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.747189 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.792754 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjdj7\" (UniqueName: \"kubernetes.io/projected/79d53422-a900-419e-8027-602fa5b1401f-kube-api-access-pjdj7\") pod \"auto-csr-approver-29536970-cm5kp\" (UID: \"79d53422-a900-419e-8027-602fa5b1401f\") " pod="openshift-infra/auto-csr-approver-29536970-cm5kp" Feb 27 18:50:08 crc kubenswrapper[4981]: 
I0227 18:50:08.805734 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.893879 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjdj7\" (UniqueName: \"kubernetes.io/projected/79d53422-a900-419e-8027-602fa5b1401f-kube-api-access-pjdj7\") pod \"auto-csr-approver-29536970-cm5kp\" (UID: \"79d53422-a900-419e-8027-602fa5b1401f\") " pod="openshift-infra/auto-csr-approver-29536970-cm5kp" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.895558 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.923070 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjdj7\" (UniqueName: \"kubernetes.io/projected/79d53422-a900-419e-8027-602fa5b1401f-kube-api-access-pjdj7\") pod \"auto-csr-approver-29536970-cm5kp\" (UID: \"79d53422-a900-419e-8027-602fa5b1401f\") " pod="openshift-infra/auto-csr-approver-29536970-cm5kp" Feb 27 18:50:08 crc kubenswrapper[4981]: I0227 18:50:08.991522 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536970-cm5kp" Feb 27 18:50:09 crc kubenswrapper[4981]: I0227 18:50:09.031172 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 27 18:50:09 crc kubenswrapper[4981]: I0227 18:50:09.032795 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 27 18:50:09 crc kubenswrapper[4981]: I0227 18:50:09.039801 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 27 18:50:09 crc kubenswrapper[4981]: I0227 18:50:09.073848 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 27 18:50:09 crc kubenswrapper[4981]: I0227 18:50:09.112299 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 27 18:50:09 crc kubenswrapper[4981]: I0227 18:50:09.173367 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 18:50:09 crc kubenswrapper[4981]: I0227 18:50:09.226975 4981 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 27 18:50:09 crc kubenswrapper[4981]: I0227 18:50:09.248488 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 27 18:50:09 crc kubenswrapper[4981]: I0227 18:50:09.329571 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 27 18:50:09 crc kubenswrapper[4981]: I0227 18:50:09.368825 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 27 18:50:09 crc kubenswrapper[4981]: I0227 18:50:09.455788 4981 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 18:50:09 crc kubenswrapper[4981]: I0227 18:50:09.490307 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 27 18:50:09 crc kubenswrapper[4981]: I0227 18:50:09.538264 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 27 18:50:09 crc kubenswrapper[4981]: I0227 18:50:09.602297 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 27 18:50:09 crc kubenswrapper[4981]: I0227 18:50:09.716957 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 27 18:50:09 crc kubenswrapper[4981]: I0227 18:50:09.730948 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 27 18:50:09 crc kubenswrapper[4981]: I0227 18:50:09.966555 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 27 18:50:09 crc kubenswrapper[4981]: I0227 18:50:09.966861 4981 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 27 18:50:10 crc kubenswrapper[4981]: I0227 18:50:10.061369 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 27 18:50:10 crc kubenswrapper[4981]: I0227 18:50:10.116422 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 27 18:50:10 crc kubenswrapper[4981]: I0227 18:50:10.122448 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 27 18:50:10 crc kubenswrapper[4981]: I0227 18:50:10.151024 4981 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 27 18:50:10 crc kubenswrapper[4981]: I0227 18:50:10.237134 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 27 18:50:10 crc kubenswrapper[4981]: I0227 18:50:10.241851 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 27 18:50:10 crc kubenswrapper[4981]: I0227 18:50:10.332201 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 27 18:50:10 crc kubenswrapper[4981]: I0227 18:50:10.380161 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 27 18:50:10 crc kubenswrapper[4981]: I0227 18:50:10.502097 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 18:50:10 crc kubenswrapper[4981]: I0227 18:50:10.519240 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5b9d67559d-jwcvt"] Feb 27 18:50:10 crc kubenswrapper[4981]: I0227 18:50:10.526501 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536970-cm5kp"] Feb 27 18:50:10 crc kubenswrapper[4981]: I0227 18:50:10.688390 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 27 18:50:10 crc kubenswrapper[4981]: I0227 18:50:10.692601 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 27 18:50:10 crc kubenswrapper[4981]: I0227 18:50:10.700006 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 27 
18:50:10 crc kubenswrapper[4981]: I0227 18:50:10.747107 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 27 18:50:10 crc kubenswrapper[4981]: I0227 18:50:10.763258 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 27 18:50:10 crc kubenswrapper[4981]: I0227 18:50:10.824371 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 27 18:50:10 crc kubenswrapper[4981]: I0227 18:50:10.862399 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 27 18:50:10 crc kubenswrapper[4981]: I0227 18:50:10.870573 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 27 18:50:10 crc kubenswrapper[4981]: I0227 18:50:10.895984 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 27 18:50:10 crc kubenswrapper[4981]: I0227 18:50:10.958027 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 27 18:50:10 crc kubenswrapper[4981]: I0227 18:50:10.969271 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 27 18:50:11 crc kubenswrapper[4981]: E0227 18:50:11.059720 4981 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 27 18:50:11 crc kubenswrapper[4981]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5b9d67559d-jwcvt_openshift-authentication_33e9e9a0-6a17-4374-b89e-d00c9d1d794c_0(0e83eefa569c72e36a91b1d5682aff78f01441bb1ce628b62eaa8f4c95798d8c): error adding pod openshift-authentication_oauth-openshift-5b9d67559d-jwcvt to 
CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0e83eefa569c72e36a91b1d5682aff78f01441bb1ce628b62eaa8f4c95798d8c" Netns:"/var/run/netns/50d6b21e-281f-45e6-b26a-db6678a3725b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5b9d67559d-jwcvt;K8S_POD_INFRA_CONTAINER_ID=0e83eefa569c72e36a91b1d5682aff78f01441bb1ce628b62eaa8f4c95798d8c;K8S_POD_UID=33e9e9a0-6a17-4374-b89e-d00c9d1d794c" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5b9d67559d-jwcvt] networking: Multus: [openshift-authentication/oauth-openshift-5b9d67559d-jwcvt/33e9e9a0-6a17-4374-b89e-d00c9d1d794c]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5b9d67559d-jwcvt in out of cluster comm: pod "oauth-openshift-5b9d67559d-jwcvt" not found Feb 27 18:50:11 crc kubenswrapper[4981]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 18:50:11 crc kubenswrapper[4981]: > Feb 27 18:50:11 crc kubenswrapper[4981]: E0227 18:50:11.060000 4981 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 27 18:50:11 crc kubenswrapper[4981]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5b9d67559d-jwcvt_openshift-authentication_33e9e9a0-6a17-4374-b89e-d00c9d1d794c_0(0e83eefa569c72e36a91b1d5682aff78f01441bb1ce628b62eaa8f4c95798d8c): error adding pod openshift-authentication_oauth-openshift-5b9d67559d-jwcvt to CNI network "multus-cni-network": plugin 
type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0e83eefa569c72e36a91b1d5682aff78f01441bb1ce628b62eaa8f4c95798d8c" Netns:"/var/run/netns/50d6b21e-281f-45e6-b26a-db6678a3725b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5b9d67559d-jwcvt;K8S_POD_INFRA_CONTAINER_ID=0e83eefa569c72e36a91b1d5682aff78f01441bb1ce628b62eaa8f4c95798d8c;K8S_POD_UID=33e9e9a0-6a17-4374-b89e-d00c9d1d794c" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5b9d67559d-jwcvt] networking: Multus: [openshift-authentication/oauth-openshift-5b9d67559d-jwcvt/33e9e9a0-6a17-4374-b89e-d00c9d1d794c]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5b9d67559d-jwcvt in out of cluster comm: pod "oauth-openshift-5b9d67559d-jwcvt" not found Feb 27 18:50:11 crc kubenswrapper[4981]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 18:50:11 crc kubenswrapper[4981]: > pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:11 crc kubenswrapper[4981]: E0227 18:50:11.060022 4981 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 27 18:50:11 crc kubenswrapper[4981]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5b9d67559d-jwcvt_openshift-authentication_33e9e9a0-6a17-4374-b89e-d00c9d1d794c_0(0e83eefa569c72e36a91b1d5682aff78f01441bb1ce628b62eaa8f4c95798d8c): error adding pod openshift-authentication_oauth-openshift-5b9d67559d-jwcvt to CNI network 
"multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0e83eefa569c72e36a91b1d5682aff78f01441bb1ce628b62eaa8f4c95798d8c" Netns:"/var/run/netns/50d6b21e-281f-45e6-b26a-db6678a3725b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5b9d67559d-jwcvt;K8S_POD_INFRA_CONTAINER_ID=0e83eefa569c72e36a91b1d5682aff78f01441bb1ce628b62eaa8f4c95798d8c;K8S_POD_UID=33e9e9a0-6a17-4374-b89e-d00c9d1d794c" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5b9d67559d-jwcvt] networking: Multus: [openshift-authentication/oauth-openshift-5b9d67559d-jwcvt/33e9e9a0-6a17-4374-b89e-d00c9d1d794c]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5b9d67559d-jwcvt in out of cluster comm: pod "oauth-openshift-5b9d67559d-jwcvt" not found Feb 27 18:50:11 crc kubenswrapper[4981]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 18:50:11 crc kubenswrapper[4981]: > pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:11 crc kubenswrapper[4981]: E0227 18:50:11.060096 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-5b9d67559d-jwcvt_openshift-authentication(33e9e9a0-6a17-4374-b89e-d00c9d1d794c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-5b9d67559d-jwcvt_openshift-authentication(33e9e9a0-6a17-4374-b89e-d00c9d1d794c)\\\": rpc error: code = Unknown desc = failed to create pod 
network sandbox k8s_oauth-openshift-5b9d67559d-jwcvt_openshift-authentication_33e9e9a0-6a17-4374-b89e-d00c9d1d794c_0(0e83eefa569c72e36a91b1d5682aff78f01441bb1ce628b62eaa8f4c95798d8c): error adding pod openshift-authentication_oauth-openshift-5b9d67559d-jwcvt to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"0e83eefa569c72e36a91b1d5682aff78f01441bb1ce628b62eaa8f4c95798d8c\\\" Netns:\\\"/var/run/netns/50d6b21e-281f-45e6-b26a-db6678a3725b\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5b9d67559d-jwcvt;K8S_POD_INFRA_CONTAINER_ID=0e83eefa569c72e36a91b1d5682aff78f01441bb1ce628b62eaa8f4c95798d8c;K8S_POD_UID=33e9e9a0-6a17-4374-b89e-d00c9d1d794c\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5b9d67559d-jwcvt] networking: Multus: [openshift-authentication/oauth-openshift-5b9d67559d-jwcvt/33e9e9a0-6a17-4374-b89e-d00c9d1d794c]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5b9d67559d-jwcvt in out of cluster comm: pod \\\"oauth-openshift-5b9d67559d-jwcvt\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" podUID="33e9e9a0-6a17-4374-b89e-d00c9d1d794c" Feb 27 18:50:11 crc kubenswrapper[4981]: I0227 18:50:11.093139 4981 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 27 18:50:11 crc kubenswrapper[4981]: I0227 18:50:11.169026 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 27 18:50:11 crc kubenswrapper[4981]: I0227 18:50:11.192401 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 27 18:50:11 crc kubenswrapper[4981]: I0227 18:50:11.253859 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 27 18:50:11 crc kubenswrapper[4981]: I0227 18:50:11.442532 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 27 18:50:11 crc kubenswrapper[4981]: I0227 18:50:11.540510 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 27 18:50:11 crc kubenswrapper[4981]: I0227 18:50:11.548360 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 27 18:50:11 crc kubenswrapper[4981]: I0227 18:50:11.628134 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 18:50:11 crc kubenswrapper[4981]: I0227 18:50:11.659305 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 27 18:50:11 crc kubenswrapper[4981]: I0227 18:50:11.755624 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 27 18:50:11 crc kubenswrapper[4981]: I0227 18:50:11.873208 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:11 crc kubenswrapper[4981]: I0227 18:50:11.873861 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:11 crc kubenswrapper[4981]: I0227 18:50:11.877372 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 27 18:50:12 crc kubenswrapper[4981]: I0227 18:50:12.002326 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 27 18:50:12 crc kubenswrapper[4981]: I0227 18:50:12.031237 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 27 18:50:12 crc kubenswrapper[4981]: I0227 18:50:12.092245 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 27 18:50:12 crc kubenswrapper[4981]: I0227 18:50:12.129835 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 27 18:50:12 crc kubenswrapper[4981]: I0227 18:50:12.184895 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 27 18:50:12 crc kubenswrapper[4981]: I0227 18:50:12.277578 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 27 18:50:12 crc kubenswrapper[4981]: E0227 18:50:12.281123 4981 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 27 18:50:12 crc kubenswrapper[4981]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29536970-cm5kp_openshift-infra_79d53422-a900-419e-8027-602fa5b1401f_0(575e98370863891074f8411bbfef9e1ee149d27ad2173dde9543ecc763c631bc): error adding pod 
openshift-infra_auto-csr-approver-29536970-cm5kp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"575e98370863891074f8411bbfef9e1ee149d27ad2173dde9543ecc763c631bc" Netns:"/var/run/netns/36ec41b6-d146-4a43-b5af-0b7ea834cf96" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-infra;K8S_POD_NAME=auto-csr-approver-29536970-cm5kp;K8S_POD_INFRA_CONTAINER_ID=575e98370863891074f8411bbfef9e1ee149d27ad2173dde9543ecc763c631bc;K8S_POD_UID=79d53422-a900-419e-8027-602fa5b1401f" Path:"" ERRORED: error configuring pod [openshift-infra/auto-csr-approver-29536970-cm5kp] networking: Multus: [openshift-infra/auto-csr-approver-29536970-cm5kp/79d53422-a900-419e-8027-602fa5b1401f]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod auto-csr-approver-29536970-cm5kp in out of cluster comm: pod "auto-csr-approver-29536970-cm5kp" not found Feb 27 18:50:12 crc kubenswrapper[4981]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 18:50:12 crc kubenswrapper[4981]: > Feb 27 18:50:12 crc kubenswrapper[4981]: E0227 18:50:12.281214 4981 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 27 18:50:12 crc kubenswrapper[4981]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29536970-cm5kp_openshift-infra_79d53422-a900-419e-8027-602fa5b1401f_0(575e98370863891074f8411bbfef9e1ee149d27ad2173dde9543ecc763c631bc): error adding pod openshift-infra_auto-csr-approver-29536970-cm5kp to CNI network "multus-cni-network": 
plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"575e98370863891074f8411bbfef9e1ee149d27ad2173dde9543ecc763c631bc" Netns:"/var/run/netns/36ec41b6-d146-4a43-b5af-0b7ea834cf96" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-infra;K8S_POD_NAME=auto-csr-approver-29536970-cm5kp;K8S_POD_INFRA_CONTAINER_ID=575e98370863891074f8411bbfef9e1ee149d27ad2173dde9543ecc763c631bc;K8S_POD_UID=79d53422-a900-419e-8027-602fa5b1401f" Path:"" ERRORED: error configuring pod [openshift-infra/auto-csr-approver-29536970-cm5kp] networking: Multus: [openshift-infra/auto-csr-approver-29536970-cm5kp/79d53422-a900-419e-8027-602fa5b1401f]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod auto-csr-approver-29536970-cm5kp in out of cluster comm: pod "auto-csr-approver-29536970-cm5kp" not found Feb 27 18:50:12 crc kubenswrapper[4981]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 18:50:12 crc kubenswrapper[4981]: > pod="openshift-infra/auto-csr-approver-29536970-cm5kp" Feb 27 18:50:12 crc kubenswrapper[4981]: E0227 18:50:12.281244 4981 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 27 18:50:12 crc kubenswrapper[4981]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29536970-cm5kp_openshift-infra_79d53422-a900-419e-8027-602fa5b1401f_0(575e98370863891074f8411bbfef9e1ee149d27ad2173dde9543ecc763c631bc): error adding pod openshift-infra_auto-csr-approver-29536970-cm5kp to CNI network "multus-cni-network": plugin type="multus-shim" 
name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"575e98370863891074f8411bbfef9e1ee149d27ad2173dde9543ecc763c631bc" Netns:"/var/run/netns/36ec41b6-d146-4a43-b5af-0b7ea834cf96" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-infra;K8S_POD_NAME=auto-csr-approver-29536970-cm5kp;K8S_POD_INFRA_CONTAINER_ID=575e98370863891074f8411bbfef9e1ee149d27ad2173dde9543ecc763c631bc;K8S_POD_UID=79d53422-a900-419e-8027-602fa5b1401f" Path:"" ERRORED: error configuring pod [openshift-infra/auto-csr-approver-29536970-cm5kp] networking: Multus: [openshift-infra/auto-csr-approver-29536970-cm5kp/79d53422-a900-419e-8027-602fa5b1401f]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod auto-csr-approver-29536970-cm5kp in out of cluster comm: pod "auto-csr-approver-29536970-cm5kp" not found Feb 27 18:50:12 crc kubenswrapper[4981]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 18:50:12 crc kubenswrapper[4981]: > pod="openshift-infra/auto-csr-approver-29536970-cm5kp" Feb 27 18:50:12 crc kubenswrapper[4981]: E0227 18:50:12.281320 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"auto-csr-approver-29536970-cm5kp_openshift-infra(79d53422-a900-419e-8027-602fa5b1401f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"auto-csr-approver-29536970-cm5kp_openshift-infra(79d53422-a900-419e-8027-602fa5b1401f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_auto-csr-approver-29536970-cm5kp_openshift-infra_79d53422-a900-419e-8027-602fa5b1401f_0(575e98370863891074f8411bbfef9e1ee149d27ad2173dde9543ecc763c631bc): error adding pod openshift-infra_auto-csr-approver-29536970-cm5kp to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"575e98370863891074f8411bbfef9e1ee149d27ad2173dde9543ecc763c631bc\\\" Netns:\\\"/var/run/netns/36ec41b6-d146-4a43-b5af-0b7ea834cf96\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-infra;K8S_POD_NAME=auto-csr-approver-29536970-cm5kp;K8S_POD_INFRA_CONTAINER_ID=575e98370863891074f8411bbfef9e1ee149d27ad2173dde9543ecc763c631bc;K8S_POD_UID=79d53422-a900-419e-8027-602fa5b1401f\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-infra/auto-csr-approver-29536970-cm5kp] networking: Multus: [openshift-infra/auto-csr-approver-29536970-cm5kp/79d53422-a900-419e-8027-602fa5b1401f]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod auto-csr-approver-29536970-cm5kp in out of cluster comm: pod \\\"auto-csr-approver-29536970-cm5kp\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-infra/auto-csr-approver-29536970-cm5kp" podUID="79d53422-a900-419e-8027-602fa5b1401f" Feb 27 18:50:12 crc kubenswrapper[4981]: I0227 18:50:12.502279 4981 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 27 18:50:12 crc kubenswrapper[4981]: I0227 18:50:12.524919 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 27 18:50:12 crc kubenswrapper[4981]: I0227 18:50:12.546357 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 27 18:50:12 crc kubenswrapper[4981]: I0227 18:50:12.580853 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 27 18:50:12 crc kubenswrapper[4981]: I0227 18:50:12.696698 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 27 18:50:12 crc kubenswrapper[4981]: I0227 18:50:12.707715 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 27 18:50:12 crc kubenswrapper[4981]: I0227 18:50:12.744079 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 27 18:50:12 crc kubenswrapper[4981]: I0227 18:50:12.762148 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 27 18:50:12 crc kubenswrapper[4981]: I0227 18:50:12.779887 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 27 18:50:12 crc kubenswrapper[4981]: I0227 18:50:12.780973 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 27 18:50:12 crc kubenswrapper[4981]: I0227 18:50:12.844605 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 27 18:50:12 crc 
kubenswrapper[4981]: I0227 18:50:12.879179 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536970-cm5kp" Feb 27 18:50:12 crc kubenswrapper[4981]: I0227 18:50:12.879886 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536970-cm5kp" Feb 27 18:50:12 crc kubenswrapper[4981]: I0227 18:50:12.893595 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 27 18:50:12 crc kubenswrapper[4981]: I0227 18:50:12.912791 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 27 18:50:12 crc kubenswrapper[4981]: I0227 18:50:12.920273 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 27 18:50:13 crc kubenswrapper[4981]: I0227 18:50:13.003571 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 27 18:50:13 crc kubenswrapper[4981]: I0227 18:50:13.037477 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 27 18:50:13 crc kubenswrapper[4981]: I0227 18:50:13.079081 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 27 18:50:13 crc kubenswrapper[4981]: I0227 18:50:13.232761 4981 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 27 18:50:13 crc kubenswrapper[4981]: I0227 18:50:13.263646 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 27 18:50:13 crc kubenswrapper[4981]: I0227 18:50:13.292871 4981 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 27 18:50:13 crc kubenswrapper[4981]: I0227 18:50:13.386086 4981 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 27 18:50:13 crc kubenswrapper[4981]: I0227 18:50:13.386391 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://d0dddaf001fea64aa6323b06d57818028076019f6eee0b79662b815ed4d53ac5" gracePeriod=5 Feb 27 18:50:13 crc kubenswrapper[4981]: I0227 18:50:13.430256 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 18:50:13 crc kubenswrapper[4981]: I0227 18:50:13.453435 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 18:50:13 crc kubenswrapper[4981]: I0227 18:50:13.464251 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 27 18:50:13 crc kubenswrapper[4981]: I0227 18:50:13.529014 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 27 18:50:13 crc kubenswrapper[4981]: I0227 18:50:13.605653 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 27 18:50:13 crc kubenswrapper[4981]: I0227 18:50:13.658846 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 27 18:50:13 crc kubenswrapper[4981]: I0227 18:50:13.727165 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 27 18:50:13 crc 
kubenswrapper[4981]: I0227 18:50:13.811634 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 18:50:13 crc kubenswrapper[4981]: I0227 18:50:13.834966 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 27 18:50:13 crc kubenswrapper[4981]: I0227 18:50:13.874123 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 27 18:50:13 crc kubenswrapper[4981]: I0227 18:50:13.876992 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 27 18:50:14 crc kubenswrapper[4981]: I0227 18:50:14.063176 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 27 18:50:14 crc kubenswrapper[4981]: I0227 18:50:14.192758 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 27 18:50:14 crc kubenswrapper[4981]: I0227 18:50:14.304866 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 27 18:50:14 crc kubenswrapper[4981]: I0227 18:50:14.710823 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 27 18:50:14 crc kubenswrapper[4981]: I0227 18:50:14.772748 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 27 18:50:14 crc kubenswrapper[4981]: I0227 18:50:14.787727 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 27 18:50:14 crc kubenswrapper[4981]: I0227 18:50:14.801737 4981 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"env-overrides" Feb 27 18:50:14 crc kubenswrapper[4981]: I0227 18:50:14.850315 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 27 18:50:14 crc kubenswrapper[4981]: I0227 18:50:14.869209 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 27 18:50:14 crc kubenswrapper[4981]: I0227 18:50:14.917127 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 27 18:50:15 crc kubenswrapper[4981]: I0227 18:50:15.021548 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 27 18:50:15 crc kubenswrapper[4981]: I0227 18:50:15.063962 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 27 18:50:15 crc kubenswrapper[4981]: E0227 18:50:15.095903 4981 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 27 18:50:15 crc kubenswrapper[4981]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5b9d67559d-jwcvt_openshift-authentication_33e9e9a0-6a17-4374-b89e-d00c9d1d794c_0(b41de9646e6df5e29a7ce0de29ac716ec36277ff8e904d8b8812bcece0cd323a): error adding pod openshift-authentication_oauth-openshift-5b9d67559d-jwcvt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b41de9646e6df5e29a7ce0de29ac716ec36277ff8e904d8b8812bcece0cd323a" Netns:"/var/run/netns/f94aaefc-96a0-45af-8652-162a70d2ccb8" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5b9d67559d-jwcvt;K8S_POD_INFRA_CONTAINER_ID=b41de9646e6df5e29a7ce0de29ac716ec36277ff8e904d8b8812bcece0cd323a;K8S_POD_UID=33e9e9a0-6a17-4374-b89e-d00c9d1d794c" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5b9d67559d-jwcvt] networking: Multus: [openshift-authentication/oauth-openshift-5b9d67559d-jwcvt/33e9e9a0-6a17-4374-b89e-d00c9d1d794c]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5b9d67559d-jwcvt in out of cluster comm: pod "oauth-openshift-5b9d67559d-jwcvt" not found Feb 27 18:50:15 crc kubenswrapper[4981]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 18:50:15 crc kubenswrapper[4981]: > Feb 27 18:50:15 crc kubenswrapper[4981]: E0227 18:50:15.096232 4981 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 27 18:50:15 crc kubenswrapper[4981]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5b9d67559d-jwcvt_openshift-authentication_33e9e9a0-6a17-4374-b89e-d00c9d1d794c_0(b41de9646e6df5e29a7ce0de29ac716ec36277ff8e904d8b8812bcece0cd323a): error adding pod openshift-authentication_oauth-openshift-5b9d67559d-jwcvt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b41de9646e6df5e29a7ce0de29ac716ec36277ff8e904d8b8812bcece0cd323a" Netns:"/var/run/netns/f94aaefc-96a0-45af-8652-162a70d2ccb8" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5b9d67559d-jwcvt;K8S_POD_INFRA_CONTAINER_ID=b41de9646e6df5e29a7ce0de29ac716ec36277ff8e904d8b8812bcece0cd323a;K8S_POD_UID=33e9e9a0-6a17-4374-b89e-d00c9d1d794c" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5b9d67559d-jwcvt] networking: Multus: [openshift-authentication/oauth-openshift-5b9d67559d-jwcvt/33e9e9a0-6a17-4374-b89e-d00c9d1d794c]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5b9d67559d-jwcvt in out of cluster comm: pod "oauth-openshift-5b9d67559d-jwcvt" not found Feb 27 18:50:15 crc kubenswrapper[4981]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 18:50:15 crc kubenswrapper[4981]: > pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:15 crc kubenswrapper[4981]: E0227 18:50:15.096265 4981 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 27 18:50:15 crc kubenswrapper[4981]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5b9d67559d-jwcvt_openshift-authentication_33e9e9a0-6a17-4374-b89e-d00c9d1d794c_0(b41de9646e6df5e29a7ce0de29ac716ec36277ff8e904d8b8812bcece0cd323a): error adding pod openshift-authentication_oauth-openshift-5b9d67559d-jwcvt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b41de9646e6df5e29a7ce0de29ac716ec36277ff8e904d8b8812bcece0cd323a" 
Netns:"/var/run/netns/f94aaefc-96a0-45af-8652-162a70d2ccb8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5b9d67559d-jwcvt;K8S_POD_INFRA_CONTAINER_ID=b41de9646e6df5e29a7ce0de29ac716ec36277ff8e904d8b8812bcece0cd323a;K8S_POD_UID=33e9e9a0-6a17-4374-b89e-d00c9d1d794c" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5b9d67559d-jwcvt] networking: Multus: [openshift-authentication/oauth-openshift-5b9d67559d-jwcvt/33e9e9a0-6a17-4374-b89e-d00c9d1d794c]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5b9d67559d-jwcvt in out of cluster comm: pod "oauth-openshift-5b9d67559d-jwcvt" not found Feb 27 18:50:15 crc kubenswrapper[4981]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 18:50:15 crc kubenswrapper[4981]: > pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:15 crc kubenswrapper[4981]: E0227 18:50:15.096330 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-5b9d67559d-jwcvt_openshift-authentication(33e9e9a0-6a17-4374-b89e-d00c9d1d794c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-5b9d67559d-jwcvt_openshift-authentication(33e9e9a0-6a17-4374-b89e-d00c9d1d794c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5b9d67559d-jwcvt_openshift-authentication_33e9e9a0-6a17-4374-b89e-d00c9d1d794c_0(b41de9646e6df5e29a7ce0de29ac716ec36277ff8e904d8b8812bcece0cd323a): error adding pod 
openshift-authentication_oauth-openshift-5b9d67559d-jwcvt to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"b41de9646e6df5e29a7ce0de29ac716ec36277ff8e904d8b8812bcece0cd323a\\\" Netns:\\\"/var/run/netns/f94aaefc-96a0-45af-8652-162a70d2ccb8\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-5b9d67559d-jwcvt;K8S_POD_INFRA_CONTAINER_ID=b41de9646e6df5e29a7ce0de29ac716ec36277ff8e904d8b8812bcece0cd323a;K8S_POD_UID=33e9e9a0-6a17-4374-b89e-d00c9d1d794c\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-5b9d67559d-jwcvt] networking: Multus: [openshift-authentication/oauth-openshift-5b9d67559d-jwcvt/33e9e9a0-6a17-4374-b89e-d00c9d1d794c]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-5b9d67559d-jwcvt in out of cluster comm: pod \\\"oauth-openshift-5b9d67559d-jwcvt\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" podUID="33e9e9a0-6a17-4374-b89e-d00c9d1d794c" Feb 27 18:50:15 crc kubenswrapper[4981]: I0227 18:50:15.127859 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 27 18:50:15 crc kubenswrapper[4981]: I0227 18:50:15.154425 4981 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 27 18:50:15 crc kubenswrapper[4981]: I0227 18:50:15.159507 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 27 18:50:15 crc kubenswrapper[4981]: I0227 18:50:15.271380 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 27 18:50:15 crc kubenswrapper[4981]: I0227 18:50:15.311475 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 27 18:50:15 crc kubenswrapper[4981]: I0227 18:50:15.341384 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 27 18:50:15 crc kubenswrapper[4981]: I0227 18:50:15.418695 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 27 18:50:15 crc kubenswrapper[4981]: I0227 18:50:15.603448 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 27 18:50:15 crc kubenswrapper[4981]: I0227 18:50:15.641577 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 27 18:50:15 crc kubenswrapper[4981]: I0227 18:50:15.668410 4981 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 27 18:50:15 crc kubenswrapper[4981]: I0227 18:50:15.734992 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 27 18:50:15 crc kubenswrapper[4981]: I0227 18:50:15.773668 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 27 18:50:16 crc kubenswrapper[4981]: I0227 18:50:16.009209 4981 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"trusted-ca" Feb 27 18:50:16 crc kubenswrapper[4981]: I0227 18:50:16.098134 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 27 18:50:16 crc kubenswrapper[4981]: E0227 18:50:16.169656 4981 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 27 18:50:16 crc kubenswrapper[4981]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29536970-cm5kp_openshift-infra_79d53422-a900-419e-8027-602fa5b1401f_0(65a00501017c41aba800277415c8ca703d5dea42a984a71c9c1f207f12f646f5): error adding pod openshift-infra_auto-csr-approver-29536970-cm5kp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"65a00501017c41aba800277415c8ca703d5dea42a984a71c9c1f207f12f646f5" Netns:"/var/run/netns/76894286-e100-42e1-964b-be4c64e010d1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-infra;K8S_POD_NAME=auto-csr-approver-29536970-cm5kp;K8S_POD_INFRA_CONTAINER_ID=65a00501017c41aba800277415c8ca703d5dea42a984a71c9c1f207f12f646f5;K8S_POD_UID=79d53422-a900-419e-8027-602fa5b1401f" Path:"" ERRORED: error configuring pod [openshift-infra/auto-csr-approver-29536970-cm5kp] networking: Multus: [openshift-infra/auto-csr-approver-29536970-cm5kp/79d53422-a900-419e-8027-602fa5b1401f]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod auto-csr-approver-29536970-cm5kp in out of cluster comm: pod "auto-csr-approver-29536970-cm5kp" not found Feb 27 18:50:16 crc kubenswrapper[4981]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 18:50:16 crc kubenswrapper[4981]: > Feb 27 18:50:16 crc kubenswrapper[4981]: E0227 18:50:16.169777 4981 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 27 18:50:16 crc kubenswrapper[4981]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29536970-cm5kp_openshift-infra_79d53422-a900-419e-8027-602fa5b1401f_0(65a00501017c41aba800277415c8ca703d5dea42a984a71c9c1f207f12f646f5): error adding pod openshift-infra_auto-csr-approver-29536970-cm5kp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"65a00501017c41aba800277415c8ca703d5dea42a984a71c9c1f207f12f646f5" Netns:"/var/run/netns/76894286-e100-42e1-964b-be4c64e010d1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-infra;K8S_POD_NAME=auto-csr-approver-29536970-cm5kp;K8S_POD_INFRA_CONTAINER_ID=65a00501017c41aba800277415c8ca703d5dea42a984a71c9c1f207f12f646f5;K8S_POD_UID=79d53422-a900-419e-8027-602fa5b1401f" Path:"" ERRORED: error configuring pod [openshift-infra/auto-csr-approver-29536970-cm5kp] networking: Multus: [openshift-infra/auto-csr-approver-29536970-cm5kp/79d53422-a900-419e-8027-602fa5b1401f]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod auto-csr-approver-29536970-cm5kp in out of cluster comm: pod "auto-csr-approver-29536970-cm5kp" not found Feb 27 18:50:16 crc kubenswrapper[4981]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 18:50:16 crc kubenswrapper[4981]: > pod="openshift-infra/auto-csr-approver-29536970-cm5kp" Feb 27 18:50:16 crc kubenswrapper[4981]: E0227 18:50:16.169809 4981 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 27 18:50:16 crc kubenswrapper[4981]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29536970-cm5kp_openshift-infra_79d53422-a900-419e-8027-602fa5b1401f_0(65a00501017c41aba800277415c8ca703d5dea42a984a71c9c1f207f12f646f5): error adding pod openshift-infra_auto-csr-approver-29536970-cm5kp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"65a00501017c41aba800277415c8ca703d5dea42a984a71c9c1f207f12f646f5" Netns:"/var/run/netns/76894286-e100-42e1-964b-be4c64e010d1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-infra;K8S_POD_NAME=auto-csr-approver-29536970-cm5kp;K8S_POD_INFRA_CONTAINER_ID=65a00501017c41aba800277415c8ca703d5dea42a984a71c9c1f207f12f646f5;K8S_POD_UID=79d53422-a900-419e-8027-602fa5b1401f" Path:"" ERRORED: error configuring pod [openshift-infra/auto-csr-approver-29536970-cm5kp] networking: Multus: [openshift-infra/auto-csr-approver-29536970-cm5kp/79d53422-a900-419e-8027-602fa5b1401f]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod auto-csr-approver-29536970-cm5kp in out of cluster comm: pod "auto-csr-approver-29536970-cm5kp" not found Feb 27 18:50:16 crc kubenswrapper[4981]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 18:50:16 crc kubenswrapper[4981]: > pod="openshift-infra/auto-csr-approver-29536970-cm5kp" Feb 27 18:50:16 crc kubenswrapper[4981]: E0227 18:50:16.169905 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"auto-csr-approver-29536970-cm5kp_openshift-infra(79d53422-a900-419e-8027-602fa5b1401f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"auto-csr-approver-29536970-cm5kp_openshift-infra(79d53422-a900-419e-8027-602fa5b1401f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_auto-csr-approver-29536970-cm5kp_openshift-infra_79d53422-a900-419e-8027-602fa5b1401f_0(65a00501017c41aba800277415c8ca703d5dea42a984a71c9c1f207f12f646f5): error adding pod openshift-infra_auto-csr-approver-29536970-cm5kp to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"65a00501017c41aba800277415c8ca703d5dea42a984a71c9c1f207f12f646f5\\\" Netns:\\\"/var/run/netns/76894286-e100-42e1-964b-be4c64e010d1\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-infra;K8S_POD_NAME=auto-csr-approver-29536970-cm5kp;K8S_POD_INFRA_CONTAINER_ID=65a00501017c41aba800277415c8ca703d5dea42a984a71c9c1f207f12f646f5;K8S_POD_UID=79d53422-a900-419e-8027-602fa5b1401f\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-infra/auto-csr-approver-29536970-cm5kp] networking: Multus: [openshift-infra/auto-csr-approver-29536970-cm5kp/79d53422-a900-419e-8027-602fa5b1401f]: error setting the networks 
status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod auto-csr-approver-29536970-cm5kp in out of cluster comm: pod \\\"auto-csr-approver-29536970-cm5kp\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-infra/auto-csr-approver-29536970-cm5kp" podUID="79d53422-a900-419e-8027-602fa5b1401f" Feb 27 18:50:16 crc kubenswrapper[4981]: I0227 18:50:16.605525 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 27 18:50:16 crc kubenswrapper[4981]: I0227 18:50:16.627299 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 27 18:50:16 crc kubenswrapper[4981]: I0227 18:50:16.676008 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 27 18:50:16 crc kubenswrapper[4981]: I0227 18:50:16.735451 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 27 18:50:16 crc kubenswrapper[4981]: I0227 18:50:16.794587 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 27 18:50:17 crc kubenswrapper[4981]: I0227 18:50:17.110551 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 27 18:50:17 crc kubenswrapper[4981]: I0227 18:50:17.162242 4981 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 27 18:50:17 crc kubenswrapper[4981]: I0227 18:50:17.231967 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 27 18:50:17 crc kubenswrapper[4981]: I0227 18:50:17.284634 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 27 18:50:17 crc kubenswrapper[4981]: I0227 18:50:17.490234 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 27 18:50:17 crc kubenswrapper[4981]: I0227 18:50:17.595114 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 27 18:50:17 crc kubenswrapper[4981]: I0227 18:50:17.645945 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 27 18:50:17 crc kubenswrapper[4981]: I0227 18:50:17.751648 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 27 18:50:18 crc kubenswrapper[4981]: I0227 18:50:18.176897 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 27 18:50:18 crc kubenswrapper[4981]: I0227 18:50:18.919594 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 27 18:50:18 crc kubenswrapper[4981]: I0227 18:50:18.919631 4981 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="d0dddaf001fea64aa6323b06d57818028076019f6eee0b79662b815ed4d53ac5" exitCode=137 Feb 27 18:50:18 crc kubenswrapper[4981]: I0227 18:50:18.984936 4981 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 27 18:50:18 crc kubenswrapper[4981]: I0227 18:50:18.985041 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 18:50:19 crc kubenswrapper[4981]: I0227 18:50:19.154483 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 27 18:50:19 crc kubenswrapper[4981]: I0227 18:50:19.154566 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 27 18:50:19 crc kubenswrapper[4981]: I0227 18:50:19.154665 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:50:19 crc kubenswrapper[4981]: I0227 18:50:19.154691 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 27 18:50:19 crc kubenswrapper[4981]: I0227 18:50:19.154760 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:50:19 crc kubenswrapper[4981]: I0227 18:50:19.154821 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:50:19 crc kubenswrapper[4981]: I0227 18:50:19.154856 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 27 18:50:19 crc kubenswrapper[4981]: I0227 18:50:19.154962 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:50:19 crc kubenswrapper[4981]: I0227 18:50:19.154971 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 27 18:50:19 crc kubenswrapper[4981]: I0227 18:50:19.155556 4981 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:19 crc kubenswrapper[4981]: I0227 18:50:19.155601 4981 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:19 crc kubenswrapper[4981]: I0227 18:50:19.155620 4981 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:19 crc kubenswrapper[4981]: I0227 18:50:19.155637 4981 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:19 crc kubenswrapper[4981]: I0227 18:50:19.166955 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:50:19 crc kubenswrapper[4981]: I0227 18:50:19.189349 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 27 18:50:19 crc kubenswrapper[4981]: I0227 18:50:19.257536 4981 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:19 crc kubenswrapper[4981]: I0227 18:50:19.641545 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 27 18:50:19 crc kubenswrapper[4981]: I0227 18:50:19.642223 4981 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 27 18:50:19 crc kubenswrapper[4981]: I0227 18:50:19.657003 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 27 18:50:19 crc kubenswrapper[4981]: I0227 18:50:19.657308 4981 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6c75c90b-b3b3-470f-94d7-4daf8254b4a9" Feb 27 18:50:19 crc kubenswrapper[4981]: I0227 18:50:19.664323 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 27 18:50:19 crc kubenswrapper[4981]: I0227 18:50:19.664380 4981 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6c75c90b-b3b3-470f-94d7-4daf8254b4a9" Feb 27 18:50:19 crc kubenswrapper[4981]: I0227 18:50:19.928828 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 27 18:50:19 crc kubenswrapper[4981]: I0227 18:50:19.929244 4981 scope.go:117] "RemoveContainer" containerID="d0dddaf001fea64aa6323b06d57818028076019f6eee0b79662b815ed4d53ac5" Feb 27 18:50:19 crc kubenswrapper[4981]: I0227 18:50:19.929432 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 18:50:20 crc kubenswrapper[4981]: I0227 18:50:20.728254 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-947f9d7f9-cqzz8"] Feb 27 18:50:20 crc kubenswrapper[4981]: I0227 18:50:20.728620 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" podUID="b5c7be0b-ae01-4409-979a-0c6df564767a" containerName="controller-manager" containerID="cri-o://2e82427061133d64a7f1c93fb5b536e5a8aa01e8882886d59adca98427fa3012" gracePeriod=30 Feb 27 18:50:20 crc kubenswrapper[4981]: I0227 18:50:20.736429 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67"] Feb 27 18:50:20 crc kubenswrapper[4981]: I0227 18:50:20.736949 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67" podUID="a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15" containerName="route-controller-manager" containerID="cri-o://311a688b683d488621b2cb3b19759aafa4eb4f51dd6fd2b2d47c137942d21a55" gracePeriod=30 Feb 27 18:50:20 crc kubenswrapper[4981]: I0227 18:50:20.939224 4981 generic.go:334] "Generic (PLEG): container finished" podID="b5c7be0b-ae01-4409-979a-0c6df564767a" containerID="2e82427061133d64a7f1c93fb5b536e5a8aa01e8882886d59adca98427fa3012" exitCode=0 Feb 27 18:50:20 crc 
kubenswrapper[4981]: I0227 18:50:20.939308 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" event={"ID":"b5c7be0b-ae01-4409-979a-0c6df564767a","Type":"ContainerDied","Data":"2e82427061133d64a7f1c93fb5b536e5a8aa01e8882886d59adca98427fa3012"} Feb 27 18:50:20 crc kubenswrapper[4981]: I0227 18:50:20.940906 4981 generic.go:334] "Generic (PLEG): container finished" podID="a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15" containerID="311a688b683d488621b2cb3b19759aafa4eb4f51dd6fd2b2d47c137942d21a55" exitCode=0 Feb 27 18:50:20 crc kubenswrapper[4981]: I0227 18:50:20.940940 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67" event={"ID":"a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15","Type":"ContainerDied","Data":"311a688b683d488621b2cb3b19759aafa4eb4f51dd6fd2b2d47c137942d21a55"} Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.281522 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.287664 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.390313 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15-config\") pod \"a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15\" (UID: \"a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15\") " Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.390458 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv5q6\" (UniqueName: \"kubernetes.io/projected/b5c7be0b-ae01-4409-979a-0c6df564767a-kube-api-access-rv5q6\") pod \"b5c7be0b-ae01-4409-979a-0c6df564767a\" (UID: \"b5c7be0b-ae01-4409-979a-0c6df564767a\") " Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.390526 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15-client-ca\") pod \"a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15\" (UID: \"a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15\") " Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.390591 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcjps\" (UniqueName: \"kubernetes.io/projected/a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15-kube-api-access-kcjps\") pod \"a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15\" (UID: \"a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15\") " Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.391569 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15-client-ca" (OuterVolumeSpecName: "client-ca") pod "a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15" (UID: "a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.391746 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5c7be0b-ae01-4409-979a-0c6df564767a-client-ca" (OuterVolumeSpecName: "client-ca") pod "b5c7be0b-ae01-4409-979a-0c6df564767a" (UID: "b5c7be0b-ae01-4409-979a-0c6df564767a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.392021 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5c7be0b-ae01-4409-979a-0c6df564767a-client-ca\") pod \"b5c7be0b-ae01-4409-979a-0c6df564767a\" (UID: \"b5c7be0b-ae01-4409-979a-0c6df564767a\") " Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.392103 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15-config" (OuterVolumeSpecName: "config") pod "a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15" (UID: "a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.392138 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5c7be0b-ae01-4409-979a-0c6df564767a-config\") pod \"b5c7be0b-ae01-4409-979a-0c6df564767a\" (UID: \"b5c7be0b-ae01-4409-979a-0c6df564767a\") " Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.392259 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5c7be0b-ae01-4409-979a-0c6df564767a-serving-cert\") pod \"b5c7be0b-ae01-4409-979a-0c6df564767a\" (UID: \"b5c7be0b-ae01-4409-979a-0c6df564767a\") " Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.392324 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5c7be0b-ae01-4409-979a-0c6df564767a-proxy-ca-bundles\") pod \"b5c7be0b-ae01-4409-979a-0c6df564767a\" (UID: \"b5c7be0b-ae01-4409-979a-0c6df564767a\") " Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.392407 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15-serving-cert\") pod \"a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15\" (UID: \"a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15\") " Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.392788 4981 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5c7be0b-ae01-4409-979a-0c6df564767a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.392831 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:21 crc kubenswrapper[4981]: 
I0227 18:50:21.392855 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5c7be0b-ae01-4409-979a-0c6df564767a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b5c7be0b-ae01-4409-979a-0c6df564767a" (UID: "b5c7be0b-ae01-4409-979a-0c6df564767a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.392859 4981 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.393399 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5c7be0b-ae01-4409-979a-0c6df564767a-config" (OuterVolumeSpecName: "config") pod "b5c7be0b-ae01-4409-979a-0c6df564767a" (UID: "b5c7be0b-ae01-4409-979a-0c6df564767a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.397256 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15-kube-api-access-kcjps" (OuterVolumeSpecName: "kube-api-access-kcjps") pod "a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15" (UID: "a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15"). InnerVolumeSpecName "kube-api-access-kcjps". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.397382 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5c7be0b-ae01-4409-979a-0c6df564767a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b5c7be0b-ae01-4409-979a-0c6df564767a" (UID: "b5c7be0b-ae01-4409-979a-0c6df564767a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.397772 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15" (UID: "a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.399234 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5c7be0b-ae01-4409-979a-0c6df564767a-kube-api-access-rv5q6" (OuterVolumeSpecName: "kube-api-access-rv5q6") pod "b5c7be0b-ae01-4409-979a-0c6df564767a" (UID: "b5c7be0b-ae01-4409-979a-0c6df564767a"). InnerVolumeSpecName "kube-api-access-rv5q6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.495151 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv5q6\" (UniqueName: \"kubernetes.io/projected/b5c7be0b-ae01-4409-979a-0c6df564767a-kube-api-access-rv5q6\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.495208 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcjps\" (UniqueName: \"kubernetes.io/projected/a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15-kube-api-access-kcjps\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.495230 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5c7be0b-ae01-4409-979a-0c6df564767a-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.495248 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5c7be0b-ae01-4409-979a-0c6df564767a-serving-cert\") on node 
\"crc\" DevicePath \"\"" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.495266 4981 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5c7be0b-ae01-4409-979a-0c6df564767a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.495284 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.917913 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq"] Feb 27 18:50:21 crc kubenswrapper[4981]: E0227 18:50:21.918244 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5c7be0b-ae01-4409-979a-0c6df564767a" containerName="controller-manager" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.918261 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5c7be0b-ae01-4409-979a-0c6df564767a" containerName="controller-manager" Feb 27 18:50:21 crc kubenswrapper[4981]: E0227 18:50:21.918289 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.918298 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 27 18:50:21 crc kubenswrapper[4981]: E0227 18:50:21.918317 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15" containerName="route-controller-manager" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.918326 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15" containerName="route-controller-manager" Feb 27 18:50:21 crc 
kubenswrapper[4981]: I0227 18:50:21.918448 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.918462 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5c7be0b-ae01-4409-979a-0c6df564767a" containerName="controller-manager" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.918477 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15" containerName="route-controller-manager" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.918923 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.932330 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-658f9978cf-9jd8z"] Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.952578 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.958152 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-658f9978cf-9jd8z"] Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.958716 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" event={"ID":"b5c7be0b-ae01-4409-979a-0c6df564767a","Type":"ContainerDied","Data":"841d21e2e4c8851764d6df943d8720039e4301679b99e2f3cce2a12e5a81d326"} Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.958773 4981 scope.go:117] "RemoveContainer" containerID="2e82427061133d64a7f1c93fb5b536e5a8aa01e8882886d59adca98427fa3012" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.958929 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-947f9d7f9-cqzz8" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.968646 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67" event={"ID":"a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15","Type":"ContainerDied","Data":"9e07c414efa85cecce5896025271fee847a2e256c61f50cdbc57712862c7f21e"} Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.968751 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67" Feb 27 18:50:21 crc kubenswrapper[4981]: I0227 18:50:21.985712 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq"] Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.000373 4981 scope.go:117] "RemoveContainer" containerID="311a688b683d488621b2cb3b19759aafa4eb4f51dd6fd2b2d47c137942d21a55" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.010377 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-947f9d7f9-cqzz8"] Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.014881 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-947f9d7f9-cqzz8"] Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.020134 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67"] Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.024882 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59f4fd5997-6wh67"] Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.102577 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26208de0-aaf1-47c9-80bd-71ed7b659d40-config\") pod \"controller-manager-658f9978cf-9jd8z\" (UID: \"26208de0-aaf1-47c9-80bd-71ed7b659d40\") " pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.102647 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dec596a-69b2-4ca4-8529-d3f5faabc0b0-client-ca\") pod 
\"route-controller-manager-66b857b88f-smrfq\" (UID: \"1dec596a-69b2-4ca4-8529-d3f5faabc0b0\") " pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.102701 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26208de0-aaf1-47c9-80bd-71ed7b659d40-proxy-ca-bundles\") pod \"controller-manager-658f9978cf-9jd8z\" (UID: \"26208de0-aaf1-47c9-80bd-71ed7b659d40\") " pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.102780 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26208de0-aaf1-47c9-80bd-71ed7b659d40-serving-cert\") pod \"controller-manager-658f9978cf-9jd8z\" (UID: \"26208de0-aaf1-47c9-80bd-71ed7b659d40\") " pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.102839 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26208de0-aaf1-47c9-80bd-71ed7b659d40-client-ca\") pod \"controller-manager-658f9978cf-9jd8z\" (UID: \"26208de0-aaf1-47c9-80bd-71ed7b659d40\") " pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.102918 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dec596a-69b2-4ca4-8529-d3f5faabc0b0-serving-cert\") pod \"route-controller-manager-66b857b88f-smrfq\" (UID: \"1dec596a-69b2-4ca4-8529-d3f5faabc0b0\") " pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 
18:50:22.102963 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dec596a-69b2-4ca4-8529-d3f5faabc0b0-config\") pod \"route-controller-manager-66b857b88f-smrfq\" (UID: \"1dec596a-69b2-4ca4-8529-d3f5faabc0b0\") " pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.102996 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prgn6\" (UniqueName: \"kubernetes.io/projected/26208de0-aaf1-47c9-80bd-71ed7b659d40-kube-api-access-prgn6\") pod \"controller-manager-658f9978cf-9jd8z\" (UID: \"26208de0-aaf1-47c9-80bd-71ed7b659d40\") " pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.103030 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwhcf\" (UniqueName: \"kubernetes.io/projected/1dec596a-69b2-4ca4-8529-d3f5faabc0b0-kube-api-access-gwhcf\") pod \"route-controller-manager-66b857b88f-smrfq\" (UID: \"1dec596a-69b2-4ca4-8529-d3f5faabc0b0\") " pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.204848 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26208de0-aaf1-47c9-80bd-71ed7b659d40-config\") pod \"controller-manager-658f9978cf-9jd8z\" (UID: \"26208de0-aaf1-47c9-80bd-71ed7b659d40\") " pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.204967 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dec596a-69b2-4ca4-8529-d3f5faabc0b0-client-ca\") pod 
\"route-controller-manager-66b857b88f-smrfq\" (UID: \"1dec596a-69b2-4ca4-8529-d3f5faabc0b0\") " pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.205046 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26208de0-aaf1-47c9-80bd-71ed7b659d40-proxy-ca-bundles\") pod \"controller-manager-658f9978cf-9jd8z\" (UID: \"26208de0-aaf1-47c9-80bd-71ed7b659d40\") " pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.205137 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26208de0-aaf1-47c9-80bd-71ed7b659d40-serving-cert\") pod \"controller-manager-658f9978cf-9jd8z\" (UID: \"26208de0-aaf1-47c9-80bd-71ed7b659d40\") " pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.205218 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26208de0-aaf1-47c9-80bd-71ed7b659d40-client-ca\") pod \"controller-manager-658f9978cf-9jd8z\" (UID: \"26208de0-aaf1-47c9-80bd-71ed7b659d40\") " pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.205327 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dec596a-69b2-4ca4-8529-d3f5faabc0b0-serving-cert\") pod \"route-controller-manager-66b857b88f-smrfq\" (UID: \"1dec596a-69b2-4ca4-8529-d3f5faabc0b0\") " pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.205393 4981 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dec596a-69b2-4ca4-8529-d3f5faabc0b0-config\") pod \"route-controller-manager-66b857b88f-smrfq\" (UID: \"1dec596a-69b2-4ca4-8529-d3f5faabc0b0\") " pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.205444 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prgn6\" (UniqueName: \"kubernetes.io/projected/26208de0-aaf1-47c9-80bd-71ed7b659d40-kube-api-access-prgn6\") pod \"controller-manager-658f9978cf-9jd8z\" (UID: \"26208de0-aaf1-47c9-80bd-71ed7b659d40\") " pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.205489 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwhcf\" (UniqueName: \"kubernetes.io/projected/1dec596a-69b2-4ca4-8529-d3f5faabc0b0-kube-api-access-gwhcf\") pod \"route-controller-manager-66b857b88f-smrfq\" (UID: \"1dec596a-69b2-4ca4-8529-d3f5faabc0b0\") " pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.206937 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dec596a-69b2-4ca4-8529-d3f5faabc0b0-client-ca\") pod \"route-controller-manager-66b857b88f-smrfq\" (UID: \"1dec596a-69b2-4ca4-8529-d3f5faabc0b0\") " pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.207097 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26208de0-aaf1-47c9-80bd-71ed7b659d40-proxy-ca-bundles\") pod \"controller-manager-658f9978cf-9jd8z\" (UID: \"26208de0-aaf1-47c9-80bd-71ed7b659d40\") " 
pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.207324 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26208de0-aaf1-47c9-80bd-71ed7b659d40-client-ca\") pod \"controller-manager-658f9978cf-9jd8z\" (UID: \"26208de0-aaf1-47c9-80bd-71ed7b659d40\") " pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.207908 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dec596a-69b2-4ca4-8529-d3f5faabc0b0-config\") pod \"route-controller-manager-66b857b88f-smrfq\" (UID: \"1dec596a-69b2-4ca4-8529-d3f5faabc0b0\") " pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.208136 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26208de0-aaf1-47c9-80bd-71ed7b659d40-config\") pod \"controller-manager-658f9978cf-9jd8z\" (UID: \"26208de0-aaf1-47c9-80bd-71ed7b659d40\") " pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.214086 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26208de0-aaf1-47c9-80bd-71ed7b659d40-serving-cert\") pod \"controller-manager-658f9978cf-9jd8z\" (UID: \"26208de0-aaf1-47c9-80bd-71ed7b659d40\") " pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.226503 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dec596a-69b2-4ca4-8529-d3f5faabc0b0-serving-cert\") pod \"route-controller-manager-66b857b88f-smrfq\" (UID: 
\"1dec596a-69b2-4ca4-8529-d3f5faabc0b0\") " pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.234386 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prgn6\" (UniqueName: \"kubernetes.io/projected/26208de0-aaf1-47c9-80bd-71ed7b659d40-kube-api-access-prgn6\") pod \"controller-manager-658f9978cf-9jd8z\" (UID: \"26208de0-aaf1-47c9-80bd-71ed7b659d40\") " pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.235383 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwhcf\" (UniqueName: \"kubernetes.io/projected/1dec596a-69b2-4ca4-8529-d3f5faabc0b0-kube-api-access-gwhcf\") pod \"route-controller-manager-66b857b88f-smrfq\" (UID: \"1dec596a-69b2-4ca4-8529-d3f5faabc0b0\") " pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.250881 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.285387 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.567536 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-658f9978cf-9jd8z"] Feb 27 18:50:22 crc kubenswrapper[4981]: W0227 18:50:22.572800 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26208de0_aaf1_47c9_80bd_71ed7b659d40.slice/crio-0f4bfe6d5e0543404b4e46027b03ad5db4265c79aa36985ebfac510939960c1c WatchSource:0}: Error finding container 0f4bfe6d5e0543404b4e46027b03ad5db4265c79aa36985ebfac510939960c1c: Status 404 returned error can't find the container with id 0f4bfe6d5e0543404b4e46027b03ad5db4265c79aa36985ebfac510939960c1c Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.736538 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq"] Feb 27 18:50:22 crc kubenswrapper[4981]: W0227 18:50:22.745487 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1dec596a_69b2_4ca4_8529_d3f5faabc0b0.slice/crio-2144a18c3b9e8b0b3347d86de2cc379912ff885ad8fffa75b216e0f2d1335fb4 WatchSource:0}: Error finding container 2144a18c3b9e8b0b3347d86de2cc379912ff885ad8fffa75b216e0f2d1335fb4: Status 404 returned error can't find the container with id 2144a18c3b9e8b0b3347d86de2cc379912ff885ad8fffa75b216e0f2d1335fb4 Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.980443 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq" event={"ID":"1dec596a-69b2-4ca4-8529-d3f5faabc0b0","Type":"ContainerStarted","Data":"edb02fe882b954caaa363543273fab3d1de1f976b731eb438921ddcfa041300e"} Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.980725 4981 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.980740 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq" event={"ID":"1dec596a-69b2-4ca4-8529-d3f5faabc0b0","Type":"ContainerStarted","Data":"2144a18c3b9e8b0b3347d86de2cc379912ff885ad8fffa75b216e0f2d1335fb4"} Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.981914 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" event={"ID":"26208de0-aaf1-47c9-80bd-71ed7b659d40","Type":"ContainerStarted","Data":"9c501723bab074cedc22bcf328a32e036b484023705ffbef90beef21b474c4df"} Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.981935 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" event={"ID":"26208de0-aaf1-47c9-80bd-71ed7b659d40","Type":"ContainerStarted","Data":"0f4bfe6d5e0543404b4e46027b03ad5db4265c79aa36985ebfac510939960c1c"} Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.982097 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.983080 4981 patch_prober.go:28] interesting pod/route-controller-manager-66b857b88f-smrfq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" start-of-body= Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.983134 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq" 
podUID="1dec596a-69b2-4ca4-8529-d3f5faabc0b0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" Feb 27 18:50:22 crc kubenswrapper[4981]: I0227 18:50:22.994241 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" Feb 27 18:50:23 crc kubenswrapper[4981]: I0227 18:50:23.001074 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq" podStartSLOduration=3.001042745 podStartE2EDuration="3.001042745s" podCreationTimestamp="2026-02-27 18:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:50:22.999846137 +0000 UTC m=+322.478627297" watchObservedRunningTime="2026-02-27 18:50:23.001042745 +0000 UTC m=+322.479823905" Feb 27 18:50:23 crc kubenswrapper[4981]: I0227 18:50:23.031274 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" podStartSLOduration=3.031256958 podStartE2EDuration="3.031256958s" podCreationTimestamp="2026-02-27 18:50:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:50:23.028648527 +0000 UTC m=+322.507429687" watchObservedRunningTime="2026-02-27 18:50:23.031256958 +0000 UTC m=+322.510038128" Feb 27 18:50:23 crc kubenswrapper[4981]: I0227 18:50:23.647431 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15" path="/var/lib/kubelet/pods/a73fa5a3-8ed3-4f6d-99be-b7b3083a9a15/volumes" Feb 27 18:50:23 crc kubenswrapper[4981]: I0227 18:50:23.648114 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="b5c7be0b-ae01-4409-979a-0c6df564767a" path="/var/lib/kubelet/pods/b5c7be0b-ae01-4409-979a-0c6df564767a/volumes" Feb 27 18:50:23 crc kubenswrapper[4981]: I0227 18:50:23.995369 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq" Feb 27 18:50:28 crc kubenswrapper[4981]: I0227 18:50:28.628223 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536970-cm5kp" Feb 27 18:50:28 crc kubenswrapper[4981]: I0227 18:50:28.629346 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536970-cm5kp" Feb 27 18:50:29 crc kubenswrapper[4981]: I0227 18:50:29.117235 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536970-cm5kp"] Feb 27 18:50:29 crc kubenswrapper[4981]: W0227 18:50:29.123808 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79d53422_a900_419e_8027_602fa5b1401f.slice/crio-38fa09f7f1deca86fc9f7eca96c2c7b90c2e4ef1bb12c6d0b8deed7995aff67b WatchSource:0}: Error finding container 38fa09f7f1deca86fc9f7eca96c2c7b90c2e4ef1bb12c6d0b8deed7995aff67b: Status 404 returned error can't find the container with id 38fa09f7f1deca86fc9f7eca96c2c7b90c2e4ef1bb12c6d0b8deed7995aff67b Feb 27 18:50:30 crc kubenswrapper[4981]: I0227 18:50:30.032040 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536970-cm5kp" event={"ID":"79d53422-a900-419e-8027-602fa5b1401f","Type":"ContainerStarted","Data":"38fa09f7f1deca86fc9f7eca96c2c7b90c2e4ef1bb12c6d0b8deed7995aff67b"} Feb 27 18:50:30 crc kubenswrapper[4981]: I0227 18:50:30.627776 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:30 crc kubenswrapper[4981]: I0227 18:50:30.628760 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:31 crc kubenswrapper[4981]: I0227 18:50:31.041043 4981 generic.go:334] "Generic (PLEG): container finished" podID="79d53422-a900-419e-8027-602fa5b1401f" containerID="8b3186e9c59609c7476c71ccd23c8acfb64733bc1763c14cdb0a0dc4efc5772f" exitCode=0 Feb 27 18:50:31 crc kubenswrapper[4981]: I0227 18:50:31.041207 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536970-cm5kp" event={"ID":"79d53422-a900-419e-8027-602fa5b1401f","Type":"ContainerDied","Data":"8b3186e9c59609c7476c71ccd23c8acfb64733bc1763c14cdb0a0dc4efc5772f"} Feb 27 18:50:31 crc kubenswrapper[4981]: I0227 18:50:31.107260 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5b9d67559d-jwcvt"] Feb 27 18:50:31 crc kubenswrapper[4981]: W0227 18:50:31.115365 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33e9e9a0_6a17_4374_b89e_d00c9d1d794c.slice/crio-f5733d92fdc4b5e96118002b3bf415a82334b10a2f6b92c4fdb6e1e5c6b08513 WatchSource:0}: Error finding container f5733d92fdc4b5e96118002b3bf415a82334b10a2f6b92c4fdb6e1e5c6b08513: Status 404 returned error can't find the container with id f5733d92fdc4b5e96118002b3bf415a82334b10a2f6b92c4fdb6e1e5c6b08513 Feb 27 18:50:32 crc kubenswrapper[4981]: I0227 18:50:32.052278 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" event={"ID":"33e9e9a0-6a17-4374-b89e-d00c9d1d794c","Type":"ContainerStarted","Data":"cb7da48e1b5b34fab2f9371792eb68543cef614da863febc247a1411f4f98910"} Feb 27 18:50:32 crc kubenswrapper[4981]: I0227 18:50:32.052650 4981 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" event={"ID":"33e9e9a0-6a17-4374-b89e-d00c9d1d794c","Type":"ContainerStarted","Data":"f5733d92fdc4b5e96118002b3bf415a82334b10a2f6b92c4fdb6e1e5c6b08513"} Feb 27 18:50:32 crc kubenswrapper[4981]: I0227 18:50:32.077816 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" podStartSLOduration=68.077794017 podStartE2EDuration="1m8.077794017s" podCreationTimestamp="2026-02-27 18:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:50:32.074533279 +0000 UTC m=+331.553314439" watchObservedRunningTime="2026-02-27 18:50:32.077794017 +0000 UTC m=+331.556575177" Feb 27 18:50:32 crc kubenswrapper[4981]: I0227 18:50:32.493100 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536970-cm5kp" Feb 27 18:50:32 crc kubenswrapper[4981]: I0227 18:50:32.656999 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjdj7\" (UniqueName: \"kubernetes.io/projected/79d53422-a900-419e-8027-602fa5b1401f-kube-api-access-pjdj7\") pod \"79d53422-a900-419e-8027-602fa5b1401f\" (UID: \"79d53422-a900-419e-8027-602fa5b1401f\") " Feb 27 18:50:32 crc kubenswrapper[4981]: I0227 18:50:32.667850 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d53422-a900-419e-8027-602fa5b1401f-kube-api-access-pjdj7" (OuterVolumeSpecName: "kube-api-access-pjdj7") pod "79d53422-a900-419e-8027-602fa5b1401f" (UID: "79d53422-a900-419e-8027-602fa5b1401f"). InnerVolumeSpecName "kube-api-access-pjdj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:50:32 crc kubenswrapper[4981]: I0227 18:50:32.760326 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjdj7\" (UniqueName: \"kubernetes.io/projected/79d53422-a900-419e-8027-602fa5b1401f-kube-api-access-pjdj7\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:33 crc kubenswrapper[4981]: I0227 18:50:33.069422 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536970-cm5kp" event={"ID":"79d53422-a900-419e-8027-602fa5b1401f","Type":"ContainerDied","Data":"38fa09f7f1deca86fc9f7eca96c2c7b90c2e4ef1bb12c6d0b8deed7995aff67b"} Feb 27 18:50:33 crc kubenswrapper[4981]: I0227 18:50:33.069465 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38fa09f7f1deca86fc9f7eca96c2c7b90c2e4ef1bb12c6d0b8deed7995aff67b" Feb 27 18:50:33 crc kubenswrapper[4981]: I0227 18:50:33.069485 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536970-cm5kp" Feb 27 18:50:33 crc kubenswrapper[4981]: I0227 18:50:33.069807 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:33 crc kubenswrapper[4981]: I0227 18:50:33.079286 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5b9d67559d-jwcvt" Feb 27 18:50:39 crc kubenswrapper[4981]: I0227 18:50:39.666944 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rhmlr"] Feb 27 18:50:39 crc kubenswrapper[4981]: I0227 18:50:39.667919 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rhmlr" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" containerName="registry-server" 
containerID="cri-o://4424a1278a85eb0c50c90f476eb0e65ac5855eadcc51c5c93d4f085a031e396d" gracePeriod=2 Feb 27 18:50:39 crc kubenswrapper[4981]: I0227 18:50:39.866342 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rvh5h"] Feb 27 18:50:39 crc kubenswrapper[4981]: I0227 18:50:39.866983 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rvh5h" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" containerName="registry-server" containerID="cri-o://7c93d4dfa7fd03d89d70b79495c91604d04f8f75a09d680bcdcfd30250b36f68" gracePeriod=2 Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.129994 4981 generic.go:334] "Generic (PLEG): container finished" podID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" containerID="7c93d4dfa7fd03d89d70b79495c91604d04f8f75a09d680bcdcfd30250b36f68" exitCode=0 Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.130076 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvh5h" event={"ID":"5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3","Type":"ContainerDied","Data":"7c93d4dfa7fd03d89d70b79495c91604d04f8f75a09d680bcdcfd30250b36f68"} Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.132135 4981 generic.go:334] "Generic (PLEG): container finished" podID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" containerID="4424a1278a85eb0c50c90f476eb0e65ac5855eadcc51c5c93d4f085a031e396d" exitCode=0 Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.132171 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rhmlr" event={"ID":"31a25fb4-5131-45cb-a965-eebe7bcf6a5d","Type":"ContainerDied","Data":"4424a1278a85eb0c50c90f476eb0e65ac5855eadcc51c5c93d4f085a031e396d"} Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.212422 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rhmlr" Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.265223 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31a25fb4-5131-45cb-a965-eebe7bcf6a5d-utilities\") pod \"31a25fb4-5131-45cb-a965-eebe7bcf6a5d\" (UID: \"31a25fb4-5131-45cb-a965-eebe7bcf6a5d\") " Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.265287 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cckb\" (UniqueName: \"kubernetes.io/projected/31a25fb4-5131-45cb-a965-eebe7bcf6a5d-kube-api-access-4cckb\") pod \"31a25fb4-5131-45cb-a965-eebe7bcf6a5d\" (UID: \"31a25fb4-5131-45cb-a965-eebe7bcf6a5d\") " Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.265452 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31a25fb4-5131-45cb-a965-eebe7bcf6a5d-catalog-content\") pod \"31a25fb4-5131-45cb-a965-eebe7bcf6a5d\" (UID: \"31a25fb4-5131-45cb-a965-eebe7bcf6a5d\") " Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.266156 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31a25fb4-5131-45cb-a965-eebe7bcf6a5d-utilities" (OuterVolumeSpecName: "utilities") pod "31a25fb4-5131-45cb-a965-eebe7bcf6a5d" (UID: "31a25fb4-5131-45cb-a965-eebe7bcf6a5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.272949 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31a25fb4-5131-45cb-a965-eebe7bcf6a5d-kube-api-access-4cckb" (OuterVolumeSpecName: "kube-api-access-4cckb") pod "31a25fb4-5131-45cb-a965-eebe7bcf6a5d" (UID: "31a25fb4-5131-45cb-a965-eebe7bcf6a5d"). InnerVolumeSpecName "kube-api-access-4cckb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.278914 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31a25fb4-5131-45cb-a965-eebe7bcf6a5d-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.279195 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cckb\" (UniqueName: \"kubernetes.io/projected/31a25fb4-5131-45cb-a965-eebe7bcf6a5d-kube-api-access-4cckb\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:40 crc kubenswrapper[4981]: E0227 18:50:40.286456 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c93d4dfa7fd03d89d70b79495c91604d04f8f75a09d680bcdcfd30250b36f68 is running failed: container process not found" containerID="7c93d4dfa7fd03d89d70b79495c91604d04f8f75a09d680bcdcfd30250b36f68" cmd=["grpc_health_probe","-addr=:50051"] Feb 27 18:50:40 crc kubenswrapper[4981]: E0227 18:50:40.286989 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c93d4dfa7fd03d89d70b79495c91604d04f8f75a09d680bcdcfd30250b36f68 is running failed: container process not found" containerID="7c93d4dfa7fd03d89d70b79495c91604d04f8f75a09d680bcdcfd30250b36f68" cmd=["grpc_health_probe","-addr=:50051"] Feb 27 18:50:40 crc kubenswrapper[4981]: E0227 18:50:40.287431 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c93d4dfa7fd03d89d70b79495c91604d04f8f75a09d680bcdcfd30250b36f68 is running failed: container process not found" containerID="7c93d4dfa7fd03d89d70b79495c91604d04f8f75a09d680bcdcfd30250b36f68" cmd=["grpc_health_probe","-addr=:50051"] Feb 27 18:50:40 crc kubenswrapper[4981]: E0227 
18:50:40.287672 4981 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7c93d4dfa7fd03d89d70b79495c91604d04f8f75a09d680bcdcfd30250b36f68 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-rvh5h" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" containerName="registry-server" Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.321166 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31a25fb4-5131-45cb-a965-eebe7bcf6a5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31a25fb4-5131-45cb-a965-eebe7bcf6a5d" (UID: "31a25fb4-5131-45cb-a965-eebe7bcf6a5d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.324967 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rvh5h" Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.380409 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3-catalog-content\") pod \"5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3\" (UID: \"5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3\") " Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.380479 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3-utilities\") pod \"5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3\" (UID: \"5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3\") " Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.380529 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vg7g\" (UniqueName: 
\"kubernetes.io/projected/5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3-kube-api-access-7vg7g\") pod \"5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3\" (UID: \"5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3\") " Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.380821 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31a25fb4-5131-45cb-a965-eebe7bcf6a5d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.382243 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3-utilities" (OuterVolumeSpecName: "utilities") pod "5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" (UID: "5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.383114 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3-kube-api-access-7vg7g" (OuterVolumeSpecName: "kube-api-access-7vg7g") pod "5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" (UID: "5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3"). InnerVolumeSpecName "kube-api-access-7vg7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.462126 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" (UID: "5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.481933 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.482080 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.482161 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vg7g\" (UniqueName: \"kubernetes.io/projected/5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3-kube-api-access-7vg7g\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.679363 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-658f9978cf-9jd8z"] Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.679558 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" podUID="26208de0-aaf1-47c9-80bd-71ed7b659d40" containerName="controller-manager" containerID="cri-o://9c501723bab074cedc22bcf328a32e036b484023705ffbef90beef21b474c4df" gracePeriod=30 Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.706225 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq"] Feb 27 18:50:40 crc kubenswrapper[4981]: I0227 18:50:40.706507 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq" podUID="1dec596a-69b2-4ca4-8529-d3f5faabc0b0" containerName="route-controller-manager" 
containerID="cri-o://edb02fe882b954caaa363543273fab3d1de1f976b731eb438921ddcfa041300e" gracePeriod=30 Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.152941 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rhmlr" event={"ID":"31a25fb4-5131-45cb-a965-eebe7bcf6a5d","Type":"ContainerDied","Data":"ae3de0f3d76ae351393a25e06d3c5341bf0b5813364a807a8b92cd398acad420"} Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.153315 4981 scope.go:117] "RemoveContainer" containerID="4424a1278a85eb0c50c90f476eb0e65ac5855eadcc51c5c93d4f085a031e396d" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.152948 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rhmlr" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.159739 4981 generic.go:334] "Generic (PLEG): container finished" podID="1dec596a-69b2-4ca4-8529-d3f5faabc0b0" containerID="edb02fe882b954caaa363543273fab3d1de1f976b731eb438921ddcfa041300e" exitCode=0 Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.159825 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq" event={"ID":"1dec596a-69b2-4ca4-8529-d3f5faabc0b0","Type":"ContainerDied","Data":"edb02fe882b954caaa363543273fab3d1de1f976b731eb438921ddcfa041300e"} Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.161704 4981 generic.go:334] "Generic (PLEG): container finished" podID="26208de0-aaf1-47c9-80bd-71ed7b659d40" containerID="9c501723bab074cedc22bcf328a32e036b484023705ffbef90beef21b474c4df" exitCode=0 Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.161797 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" 
event={"ID":"26208de0-aaf1-47c9-80bd-71ed7b659d40","Type":"ContainerDied","Data":"9c501723bab074cedc22bcf328a32e036b484023705ffbef90beef21b474c4df"} Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.171368 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvh5h" event={"ID":"5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3","Type":"ContainerDied","Data":"260e200f77b7d050bc98b85e3e9ba8dedee4f4294c2cd6bdd95b11dd5b190e83"} Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.171494 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rvh5h" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.176609 4981 scope.go:117] "RemoveContainer" containerID="ba967cc3360c157115cac8468bad3e1b6085e834ff71572f96c74e414f02fb32" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.199410 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rhmlr"] Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.202770 4981 scope.go:117] "RemoveContainer" containerID="c02904ebbf79c49323ebdb9533eef83b081fdc3659409d21123f959212468b81" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.203518 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rhmlr"] Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.219727 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rvh5h"] Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.223808 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rvh5h"] Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.244761 4981 scope.go:117] "RemoveContainer" containerID="7c93d4dfa7fd03d89d70b79495c91604d04f8f75a09d680bcdcfd30250b36f68" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.262909 4981 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.266788 4981 scope.go:117] "RemoveContainer" containerID="3ee26685cf416571f2bd3a77b2535c8a8394c823cc487b19edec6d98f5947177" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.294583 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dec596a-69b2-4ca4-8529-d3f5faabc0b0-config\") pod \"1dec596a-69b2-4ca4-8529-d3f5faabc0b0\" (UID: \"1dec596a-69b2-4ca4-8529-d3f5faabc0b0\") " Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.294658 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwhcf\" (UniqueName: \"kubernetes.io/projected/1dec596a-69b2-4ca4-8529-d3f5faabc0b0-kube-api-access-gwhcf\") pod \"1dec596a-69b2-4ca4-8529-d3f5faabc0b0\" (UID: \"1dec596a-69b2-4ca4-8529-d3f5faabc0b0\") " Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.294719 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dec596a-69b2-4ca4-8529-d3f5faabc0b0-serving-cert\") pod \"1dec596a-69b2-4ca4-8529-d3f5faabc0b0\" (UID: \"1dec596a-69b2-4ca4-8529-d3f5faabc0b0\") " Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.294746 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dec596a-69b2-4ca4-8529-d3f5faabc0b0-client-ca\") pod \"1dec596a-69b2-4ca4-8529-d3f5faabc0b0\" (UID: \"1dec596a-69b2-4ca4-8529-d3f5faabc0b0\") " Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.295965 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dec596a-69b2-4ca4-8529-d3f5faabc0b0-client-ca" (OuterVolumeSpecName: "client-ca") pod "1dec596a-69b2-4ca4-8529-d3f5faabc0b0" (UID: 
"1dec596a-69b2-4ca4-8529-d3f5faabc0b0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.296580 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dec596a-69b2-4ca4-8529-d3f5faabc0b0-config" (OuterVolumeSpecName: "config") pod "1dec596a-69b2-4ca4-8529-d3f5faabc0b0" (UID: "1dec596a-69b2-4ca4-8529-d3f5faabc0b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.300917 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dec596a-69b2-4ca4-8529-d3f5faabc0b0-kube-api-access-gwhcf" (OuterVolumeSpecName: "kube-api-access-gwhcf") pod "1dec596a-69b2-4ca4-8529-d3f5faabc0b0" (UID: "1dec596a-69b2-4ca4-8529-d3f5faabc0b0"). InnerVolumeSpecName "kube-api-access-gwhcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.302677 4981 scope.go:117] "RemoveContainer" containerID="02f9a36df7feb437b761c11c80b13ef63110c395ee32cac04446b9defa8e2922" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.303713 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dec596a-69b2-4ca4-8529-d3f5faabc0b0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1dec596a-69b2-4ca4-8529-d3f5faabc0b0" (UID: "1dec596a-69b2-4ca4-8529-d3f5faabc0b0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.366603 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.397318 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26208de0-aaf1-47c9-80bd-71ed7b659d40-config\") pod \"26208de0-aaf1-47c9-80bd-71ed7b659d40\" (UID: \"26208de0-aaf1-47c9-80bd-71ed7b659d40\") " Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.397367 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26208de0-aaf1-47c9-80bd-71ed7b659d40-proxy-ca-bundles\") pod \"26208de0-aaf1-47c9-80bd-71ed7b659d40\" (UID: \"26208de0-aaf1-47c9-80bd-71ed7b659d40\") " Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.397403 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26208de0-aaf1-47c9-80bd-71ed7b659d40-client-ca\") pod \"26208de0-aaf1-47c9-80bd-71ed7b659d40\" (UID: \"26208de0-aaf1-47c9-80bd-71ed7b659d40\") " Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.397466 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prgn6\" (UniqueName: \"kubernetes.io/projected/26208de0-aaf1-47c9-80bd-71ed7b659d40-kube-api-access-prgn6\") pod \"26208de0-aaf1-47c9-80bd-71ed7b659d40\" (UID: \"26208de0-aaf1-47c9-80bd-71ed7b659d40\") " Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.397501 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26208de0-aaf1-47c9-80bd-71ed7b659d40-serving-cert\") pod \"26208de0-aaf1-47c9-80bd-71ed7b659d40\" (UID: \"26208de0-aaf1-47c9-80bd-71ed7b659d40\") " Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.398539 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/1dec596a-69b2-4ca4-8529-d3f5faabc0b0-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.398574 4981 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dec596a-69b2-4ca4-8529-d3f5faabc0b0-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.398587 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dec596a-69b2-4ca4-8529-d3f5faabc0b0-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.398602 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwhcf\" (UniqueName: \"kubernetes.io/projected/1dec596a-69b2-4ca4-8529-d3f5faabc0b0-kube-api-access-gwhcf\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.399166 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26208de0-aaf1-47c9-80bd-71ed7b659d40-client-ca" (OuterVolumeSpecName: "client-ca") pod "26208de0-aaf1-47c9-80bd-71ed7b659d40" (UID: "26208de0-aaf1-47c9-80bd-71ed7b659d40"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.399288 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26208de0-aaf1-47c9-80bd-71ed7b659d40-config" (OuterVolumeSpecName: "config") pod "26208de0-aaf1-47c9-80bd-71ed7b659d40" (UID: "26208de0-aaf1-47c9-80bd-71ed7b659d40"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.399845 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26208de0-aaf1-47c9-80bd-71ed7b659d40-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "26208de0-aaf1-47c9-80bd-71ed7b659d40" (UID: "26208de0-aaf1-47c9-80bd-71ed7b659d40"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.401418 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26208de0-aaf1-47c9-80bd-71ed7b659d40-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "26208de0-aaf1-47c9-80bd-71ed7b659d40" (UID: "26208de0-aaf1-47c9-80bd-71ed7b659d40"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.402255 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26208de0-aaf1-47c9-80bd-71ed7b659d40-kube-api-access-prgn6" (OuterVolumeSpecName: "kube-api-access-prgn6") pod "26208de0-aaf1-47c9-80bd-71ed7b659d40" (UID: "26208de0-aaf1-47c9-80bd-71ed7b659d40"). InnerVolumeSpecName "kube-api-access-prgn6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.499461 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prgn6\" (UniqueName: \"kubernetes.io/projected/26208de0-aaf1-47c9-80bd-71ed7b659d40-kube-api-access-prgn6\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.499516 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26208de0-aaf1-47c9-80bd-71ed7b659d40-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.499537 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26208de0-aaf1-47c9-80bd-71ed7b659d40-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.499555 4981 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/26208de0-aaf1-47c9-80bd-71ed7b659d40-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.499572 4981 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26208de0-aaf1-47c9-80bd-71ed7b659d40-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.642245 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" path="/var/lib/kubelet/pods/31a25fb4-5131-45cb-a965-eebe7bcf6a5d/volumes" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.643836 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" path="/var/lib/kubelet/pods/5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3/volumes" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.934009 4981 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr"] Feb 27 18:50:41 crc kubenswrapper[4981]: E0227 18:50:41.934407 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d53422-a900-419e-8027-602fa5b1401f" containerName="oc" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.934429 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d53422-a900-419e-8027-602fa5b1401f" containerName="oc" Feb 27 18:50:41 crc kubenswrapper[4981]: E0227 18:50:41.934455 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" containerName="registry-server" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.934467 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" containerName="registry-server" Feb 27 18:50:41 crc kubenswrapper[4981]: E0227 18:50:41.934483 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" containerName="extract-utilities" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.934498 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" containerName="extract-utilities" Feb 27 18:50:41 crc kubenswrapper[4981]: E0227 18:50:41.934519 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" containerName="registry-server" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.934532 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" containerName="registry-server" Feb 27 18:50:41 crc kubenswrapper[4981]: E0227 18:50:41.934551 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26208de0-aaf1-47c9-80bd-71ed7b659d40" containerName="controller-manager" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.934562 4981 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="26208de0-aaf1-47c9-80bd-71ed7b659d40" containerName="controller-manager" Feb 27 18:50:41 crc kubenswrapper[4981]: E0227 18:50:41.934580 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" containerName="extract-utilities" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.934592 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" containerName="extract-utilities" Feb 27 18:50:41 crc kubenswrapper[4981]: E0227 18:50:41.934610 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" containerName="extract-content" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.934623 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" containerName="extract-content" Feb 27 18:50:41 crc kubenswrapper[4981]: E0227 18:50:41.934651 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dec596a-69b2-4ca4-8529-d3f5faabc0b0" containerName="route-controller-manager" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.934663 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dec596a-69b2-4ca4-8529-d3f5faabc0b0" containerName="route-controller-manager" Feb 27 18:50:41 crc kubenswrapper[4981]: E0227 18:50:41.934679 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" containerName="extract-content" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.934692 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" containerName="extract-content" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.934901 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d53422-a900-419e-8027-602fa5b1401f" containerName="oc" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.934926 4981 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="31a25fb4-5131-45cb-a965-eebe7bcf6a5d" containerName="registry-server" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.934954 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dec596a-69b2-4ca4-8529-d3f5faabc0b0" containerName="route-controller-manager" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.934978 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f07d803-2ad1-4f1c-b8dd-d0ffe97335c3" containerName="registry-server" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.935007 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="26208de0-aaf1-47c9-80bd-71ed7b659d40" containerName="controller-manager" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.935746 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.941855 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-597c79bdbb-bd7qb"] Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.946167 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.960841 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr"] Feb 27 18:50:41 crc kubenswrapper[4981]: I0227 18:50:41.980534 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-597c79bdbb-bd7qb"] Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.005267 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d6ffbc4-17da-44ee-8e00-66601141abd7-config\") pod \"controller-manager-597c79bdbb-bd7qb\" (UID: \"0d6ffbc4-17da-44ee-8e00-66601141abd7\") " pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.005371 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bda1c106-87ee-4d5a-b83f-1670a89f9f8c-config\") pod \"route-controller-manager-84f5659d8d-c57kr\" (UID: \"bda1c106-87ee-4d5a-b83f-1670a89f9f8c\") " pod="openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.005420 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d6ffbc4-17da-44ee-8e00-66601141abd7-client-ca\") pod \"controller-manager-597c79bdbb-bd7qb\" (UID: \"0d6ffbc4-17da-44ee-8e00-66601141abd7\") " pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.005482 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/bda1c106-87ee-4d5a-b83f-1670a89f9f8c-client-ca\") pod \"route-controller-manager-84f5659d8d-c57kr\" (UID: \"bda1c106-87ee-4d5a-b83f-1670a89f9f8c\") " pod="openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.005535 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw42x\" (UniqueName: \"kubernetes.io/projected/0d6ffbc4-17da-44ee-8e00-66601141abd7-kube-api-access-sw42x\") pod \"controller-manager-597c79bdbb-bd7qb\" (UID: \"0d6ffbc4-17da-44ee-8e00-66601141abd7\") " pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.005572 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bda1c106-87ee-4d5a-b83f-1670a89f9f8c-serving-cert\") pod \"route-controller-manager-84f5659d8d-c57kr\" (UID: \"bda1c106-87ee-4d5a-b83f-1670a89f9f8c\") " pod="openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.005606 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d6ffbc4-17da-44ee-8e00-66601141abd7-proxy-ca-bundles\") pod \"controller-manager-597c79bdbb-bd7qb\" (UID: \"0d6ffbc4-17da-44ee-8e00-66601141abd7\") " pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.005648 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d6ffbc4-17da-44ee-8e00-66601141abd7-serving-cert\") pod \"controller-manager-597c79bdbb-bd7qb\" (UID: \"0d6ffbc4-17da-44ee-8e00-66601141abd7\") " 
pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.005717 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p5bc\" (UniqueName: \"kubernetes.io/projected/bda1c106-87ee-4d5a-b83f-1670a89f9f8c-kube-api-access-2p5bc\") pod \"route-controller-manager-84f5659d8d-c57kr\" (UID: \"bda1c106-87ee-4d5a-b83f-1670a89f9f8c\") " pod="openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.064441 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6w9qb"] Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.064927 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6w9qb" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" containerName="registry-server" containerID="cri-o://e0c0aecf3978cf4d63033b6ed5f49c27e8b9e3d5ee2e8a145c81d8b06d336d73" gracePeriod=2 Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.106837 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p5bc\" (UniqueName: \"kubernetes.io/projected/bda1c106-87ee-4d5a-b83f-1670a89f9f8c-kube-api-access-2p5bc\") pod \"route-controller-manager-84f5659d8d-c57kr\" (UID: \"bda1c106-87ee-4d5a-b83f-1670a89f9f8c\") " pod="openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.107246 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d6ffbc4-17da-44ee-8e00-66601141abd7-config\") pod \"controller-manager-597c79bdbb-bd7qb\" (UID: \"0d6ffbc4-17da-44ee-8e00-66601141abd7\") " pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 
18:50:42.107311 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bda1c106-87ee-4d5a-b83f-1670a89f9f8c-config\") pod \"route-controller-manager-84f5659d8d-c57kr\" (UID: \"bda1c106-87ee-4d5a-b83f-1670a89f9f8c\") " pod="openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.107360 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d6ffbc4-17da-44ee-8e00-66601141abd7-client-ca\") pod \"controller-manager-597c79bdbb-bd7qb\" (UID: \"0d6ffbc4-17da-44ee-8e00-66601141abd7\") " pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.107403 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bda1c106-87ee-4d5a-b83f-1670a89f9f8c-client-ca\") pod \"route-controller-manager-84f5659d8d-c57kr\" (UID: \"bda1c106-87ee-4d5a-b83f-1670a89f9f8c\") " pod="openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.107454 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw42x\" (UniqueName: \"kubernetes.io/projected/0d6ffbc4-17da-44ee-8e00-66601141abd7-kube-api-access-sw42x\") pod \"controller-manager-597c79bdbb-bd7qb\" (UID: \"0d6ffbc4-17da-44ee-8e00-66601141abd7\") " pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.107491 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bda1c106-87ee-4d5a-b83f-1670a89f9f8c-serving-cert\") pod \"route-controller-manager-84f5659d8d-c57kr\" (UID: 
\"bda1c106-87ee-4d5a-b83f-1670a89f9f8c\") " pod="openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.107527 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d6ffbc4-17da-44ee-8e00-66601141abd7-proxy-ca-bundles\") pod \"controller-manager-597c79bdbb-bd7qb\" (UID: \"0d6ffbc4-17da-44ee-8e00-66601141abd7\") " pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.107572 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d6ffbc4-17da-44ee-8e00-66601141abd7-serving-cert\") pod \"controller-manager-597c79bdbb-bd7qb\" (UID: \"0d6ffbc4-17da-44ee-8e00-66601141abd7\") " pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.110395 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d6ffbc4-17da-44ee-8e00-66601141abd7-proxy-ca-bundles\") pod \"controller-manager-597c79bdbb-bd7qb\" (UID: \"0d6ffbc4-17da-44ee-8e00-66601141abd7\") " pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.110825 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bda1c106-87ee-4d5a-b83f-1670a89f9f8c-client-ca\") pod \"route-controller-manager-84f5659d8d-c57kr\" (UID: \"bda1c106-87ee-4d5a-b83f-1670a89f9f8c\") " pod="openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.111222 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bda1c106-87ee-4d5a-b83f-1670a89f9f8c-config\") pod \"route-controller-manager-84f5659d8d-c57kr\" (UID: \"bda1c106-87ee-4d5a-b83f-1670a89f9f8c\") " pod="openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.121095 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d6ffbc4-17da-44ee-8e00-66601141abd7-config\") pod \"controller-manager-597c79bdbb-bd7qb\" (UID: \"0d6ffbc4-17da-44ee-8e00-66601141abd7\") " pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.121124 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d6ffbc4-17da-44ee-8e00-66601141abd7-client-ca\") pod \"controller-manager-597c79bdbb-bd7qb\" (UID: \"0d6ffbc4-17da-44ee-8e00-66601141abd7\") " pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.121339 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d6ffbc4-17da-44ee-8e00-66601141abd7-serving-cert\") pod \"controller-manager-597c79bdbb-bd7qb\" (UID: \"0d6ffbc4-17da-44ee-8e00-66601141abd7\") " pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.121858 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bda1c106-87ee-4d5a-b83f-1670a89f9f8c-serving-cert\") pod \"route-controller-manager-84f5659d8d-c57kr\" (UID: \"bda1c106-87ee-4d5a-b83f-1670a89f9f8c\") " pod="openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.138328 4981 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2p5bc\" (UniqueName: \"kubernetes.io/projected/bda1c106-87ee-4d5a-b83f-1670a89f9f8c-kube-api-access-2p5bc\") pod \"route-controller-manager-84f5659d8d-c57kr\" (UID: \"bda1c106-87ee-4d5a-b83f-1670a89f9f8c\") " pod="openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.143769 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw42x\" (UniqueName: \"kubernetes.io/projected/0d6ffbc4-17da-44ee-8e00-66601141abd7-kube-api-access-sw42x\") pod \"controller-manager-597c79bdbb-bd7qb\" (UID: \"0d6ffbc4-17da-44ee-8e00-66601141abd7\") " pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.189546 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.189562 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-658f9978cf-9jd8z" event={"ID":"26208de0-aaf1-47c9-80bd-71ed7b659d40","Type":"ContainerDied","Data":"0f4bfe6d5e0543404b4e46027b03ad5db4265c79aa36985ebfac510939960c1c"} Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.189678 4981 scope.go:117] "RemoveContainer" containerID="9c501723bab074cedc22bcf328a32e036b484023705ffbef90beef21b474c4df" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.199287 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq" event={"ID":"1dec596a-69b2-4ca4-8529-d3f5faabc0b0","Type":"ContainerDied","Data":"2144a18c3b9e8b0b3347d86de2cc379912ff885ad8fffa75b216e0f2d1335fb4"} Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.199472 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.225019 4981 scope.go:117] "RemoveContainer" containerID="edb02fe882b954caaa363543273fab3d1de1f976b731eb438921ddcfa041300e" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.229304 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-658f9978cf-9jd8z"] Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.232906 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-658f9978cf-9jd8z"] Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.244807 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq"] Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.250039 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66b857b88f-smrfq"] Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.261041 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zrwz4"] Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.261305 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zrwz4" podUID="fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c" containerName="registry-server" containerID="cri-o://db1ebaf4598a782c9c5b2168db854dbe4ea8aa0bb9e16d47cac86714bc064f6f" gracePeriod=2 Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.281347 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.289910 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" Feb 27 18:50:42 crc kubenswrapper[4981]: E0227 18:50:42.297672 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e0c0aecf3978cf4d63033b6ed5f49c27e8b9e3d5ee2e8a145c81d8b06d336d73 is running failed: container process not found" containerID="e0c0aecf3978cf4d63033b6ed5f49c27e8b9e3d5ee2e8a145c81d8b06d336d73" cmd=["grpc_health_probe","-addr=:50051"] Feb 27 18:50:42 crc kubenswrapper[4981]: E0227 18:50:42.299540 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e0c0aecf3978cf4d63033b6ed5f49c27e8b9e3d5ee2e8a145c81d8b06d336d73 is running failed: container process not found" containerID="e0c0aecf3978cf4d63033b6ed5f49c27e8b9e3d5ee2e8a145c81d8b06d336d73" cmd=["grpc_health_probe","-addr=:50051"] Feb 27 18:50:42 crc kubenswrapper[4981]: E0227 18:50:42.299851 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e0c0aecf3978cf4d63033b6ed5f49c27e8b9e3d5ee2e8a145c81d8b06d336d73 is running failed: container process not found" containerID="e0c0aecf3978cf4d63033b6ed5f49c27e8b9e3d5ee2e8a145c81d8b06d336d73" cmd=["grpc_health_probe","-addr=:50051"] Feb 27 18:50:42 crc kubenswrapper[4981]: E0227 18:50:42.299903 4981 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e0c0aecf3978cf4d63033b6ed5f49c27e8b9e3d5ee2e8a145c81d8b06d336d73 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-6w9qb" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" containerName="registry-server" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.427393 4981 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6w9qb" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.511747 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d12f02-fe5f-4ca7-a190-852ad6284190-utilities\") pod \"b0d12f02-fe5f-4ca7-a190-852ad6284190\" (UID: \"b0d12f02-fe5f-4ca7-a190-852ad6284190\") " Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.511864 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drxn7\" (UniqueName: \"kubernetes.io/projected/b0d12f02-fe5f-4ca7-a190-852ad6284190-kube-api-access-drxn7\") pod \"b0d12f02-fe5f-4ca7-a190-852ad6284190\" (UID: \"b0d12f02-fe5f-4ca7-a190-852ad6284190\") " Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.511924 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d12f02-fe5f-4ca7-a190-852ad6284190-catalog-content\") pod \"b0d12f02-fe5f-4ca7-a190-852ad6284190\" (UID: \"b0d12f02-fe5f-4ca7-a190-852ad6284190\") " Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.512853 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0d12f02-fe5f-4ca7-a190-852ad6284190-utilities" (OuterVolumeSpecName: "utilities") pod "b0d12f02-fe5f-4ca7-a190-852ad6284190" (UID: "b0d12f02-fe5f-4ca7-a190-852ad6284190"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.525435 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0d12f02-fe5f-4ca7-a190-852ad6284190-kube-api-access-drxn7" (OuterVolumeSpecName: "kube-api-access-drxn7") pod "b0d12f02-fe5f-4ca7-a190-852ad6284190" (UID: "b0d12f02-fe5f-4ca7-a190-852ad6284190"). 
InnerVolumeSpecName "kube-api-access-drxn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.534692 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0d12f02-fe5f-4ca7-a190-852ad6284190-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0d12f02-fe5f-4ca7-a190-852ad6284190" (UID: "b0d12f02-fe5f-4ca7-a190-852ad6284190"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.613195 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d12f02-fe5f-4ca7-a190-852ad6284190-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.613243 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drxn7\" (UniqueName: \"kubernetes.io/projected/b0d12f02-fe5f-4ca7-a190-852ad6284190-kube-api-access-drxn7\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.613262 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d12f02-fe5f-4ca7-a190-852ad6284190-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.673003 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zrwz4" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.714372 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvvdw\" (UniqueName: \"kubernetes.io/projected/fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c-kube-api-access-dvvdw\") pod \"fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c\" (UID: \"fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c\") " Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.714432 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c-utilities\") pod \"fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c\" (UID: \"fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c\") " Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.714513 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c-catalog-content\") pod \"fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c\" (UID: \"fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c\") " Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.715746 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c-utilities" (OuterVolumeSpecName: "utilities") pod "fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c" (UID: "fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.717453 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c-kube-api-access-dvvdw" (OuterVolumeSpecName: "kube-api-access-dvvdw") pod "fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c" (UID: "fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c"). InnerVolumeSpecName "kube-api-access-dvvdw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.732755 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr"] Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.817093 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.817140 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvvdw\" (UniqueName: \"kubernetes.io/projected/fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c-kube-api-access-dvvdw\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.854556 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-597c79bdbb-bd7qb"] Feb 27 18:50:42 crc kubenswrapper[4981]: W0227 18:50:42.855508 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d6ffbc4_17da_44ee_8e00_66601141abd7.slice/crio-e6d918b7442ef45c636d9dfa83413dd7cdc46d802b09d93f46541c4c0fd8e8ff WatchSource:0}: Error finding container e6d918b7442ef45c636d9dfa83413dd7cdc46d802b09d93f46541c4c0fd8e8ff: Status 404 returned error can't find the container with id e6d918b7442ef45c636d9dfa83413dd7cdc46d802b09d93f46541c4c0fd8e8ff Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.857686 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c" (UID: "fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:50:42 crc kubenswrapper[4981]: I0227 18:50:42.918761 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.208586 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" event={"ID":"0d6ffbc4-17da-44ee-8e00-66601141abd7","Type":"ContainerStarted","Data":"7c9686296711fbe3ff893c1f590b0841ff9a92aacc41723d6af978383eb22d0c"} Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.208952 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.208978 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" event={"ID":"0d6ffbc4-17da-44ee-8e00-66601141abd7","Type":"ContainerStarted","Data":"e6d918b7442ef45c636d9dfa83413dd7cdc46d802b09d93f46541c4c0fd8e8ff"} Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.212494 4981 generic.go:334] "Generic (PLEG): container finished" podID="b0d12f02-fe5f-4ca7-a190-852ad6284190" containerID="e0c0aecf3978cf4d63033b6ed5f49c27e8b9e3d5ee2e8a145c81d8b06d336d73" exitCode=0 Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.212548 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6w9qb" event={"ID":"b0d12f02-fe5f-4ca7-a190-852ad6284190","Type":"ContainerDied","Data":"e0c0aecf3978cf4d63033b6ed5f49c27e8b9e3d5ee2e8a145c81d8b06d336d73"} Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.212570 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6w9qb" 
event={"ID":"b0d12f02-fe5f-4ca7-a190-852ad6284190","Type":"ContainerDied","Data":"f9ec8ac2b1c564b12a5e13c0ec4fa4657b9d1a30d7e7bd32e871e1e5fbf0d5b0"} Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.212586 4981 scope.go:117] "RemoveContainer" containerID="e0c0aecf3978cf4d63033b6ed5f49c27e8b9e3d5ee2e8a145c81d8b06d336d73" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.212666 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6w9qb" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.217800 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.218929 4981 generic.go:334] "Generic (PLEG): container finished" podID="fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c" containerID="db1ebaf4598a782c9c5b2168db854dbe4ea8aa0bb9e16d47cac86714bc064f6f" exitCode=0 Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.218981 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrwz4" event={"ID":"fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c","Type":"ContainerDied","Data":"db1ebaf4598a782c9c5b2168db854dbe4ea8aa0bb9e16d47cac86714bc064f6f"} Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.219002 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrwz4" event={"ID":"fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c","Type":"ContainerDied","Data":"b0544498d79a13f1021a9f8cfe7c6d39d722ff0ad05da6df5a3496895d05f951"} Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.219069 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zrwz4" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.225401 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr" event={"ID":"bda1c106-87ee-4d5a-b83f-1670a89f9f8c","Type":"ContainerStarted","Data":"ab3f1da2aa9a3818177cd17d4ee3c62a4dc9ef7bcff2847993e3e30b9944bffc"} Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.225449 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.225461 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr" event={"ID":"bda1c106-87ee-4d5a-b83f-1670a89f9f8c","Type":"ContainerStarted","Data":"a6a082697e49081576b37281bc40bdf43f8e603a255945750ebc60c059d98026"} Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.228571 4981 scope.go:117] "RemoveContainer" containerID="0be60c7a1334a61631037ba538dabf35bce47213866ae2630c470804a5160668" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.235731 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" podStartSLOduration=3.235710659 podStartE2EDuration="3.235710659s" podCreationTimestamp="2026-02-27 18:50:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:50:43.230342272 +0000 UTC m=+342.709123462" watchObservedRunningTime="2026-02-27 18:50:43.235710659 +0000 UTC m=+342.714491819" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.256871 4981 scope.go:117] "RemoveContainer" containerID="c5f6787846da151f909ec9bf2d0b15f05fdd8c05b853ff4fb5d8d0d5c909e24c" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 
18:50:43.275686 4981 scope.go:117] "RemoveContainer" containerID="e0c0aecf3978cf4d63033b6ed5f49c27e8b9e3d5ee2e8a145c81d8b06d336d73" Feb 27 18:50:43 crc kubenswrapper[4981]: E0227 18:50:43.276127 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0c0aecf3978cf4d63033b6ed5f49c27e8b9e3d5ee2e8a145c81d8b06d336d73\": container with ID starting with e0c0aecf3978cf4d63033b6ed5f49c27e8b9e3d5ee2e8a145c81d8b06d336d73 not found: ID does not exist" containerID="e0c0aecf3978cf4d63033b6ed5f49c27e8b9e3d5ee2e8a145c81d8b06d336d73" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.276180 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0c0aecf3978cf4d63033b6ed5f49c27e8b9e3d5ee2e8a145c81d8b06d336d73"} err="failed to get container status \"e0c0aecf3978cf4d63033b6ed5f49c27e8b9e3d5ee2e8a145c81d8b06d336d73\": rpc error: code = NotFound desc = could not find container \"e0c0aecf3978cf4d63033b6ed5f49c27e8b9e3d5ee2e8a145c81d8b06d336d73\": container with ID starting with e0c0aecf3978cf4d63033b6ed5f49c27e8b9e3d5ee2e8a145c81d8b06d336d73 not found: ID does not exist" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.276211 4981 scope.go:117] "RemoveContainer" containerID="0be60c7a1334a61631037ba538dabf35bce47213866ae2630c470804a5160668" Feb 27 18:50:43 crc kubenswrapper[4981]: E0227 18:50:43.276487 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0be60c7a1334a61631037ba538dabf35bce47213866ae2630c470804a5160668\": container with ID starting with 0be60c7a1334a61631037ba538dabf35bce47213866ae2630c470804a5160668 not found: ID does not exist" containerID="0be60c7a1334a61631037ba538dabf35bce47213866ae2630c470804a5160668" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.276524 4981 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0be60c7a1334a61631037ba538dabf35bce47213866ae2630c470804a5160668"} err="failed to get container status \"0be60c7a1334a61631037ba538dabf35bce47213866ae2630c470804a5160668\": rpc error: code = NotFound desc = could not find container \"0be60c7a1334a61631037ba538dabf35bce47213866ae2630c470804a5160668\": container with ID starting with 0be60c7a1334a61631037ba538dabf35bce47213866ae2630c470804a5160668 not found: ID does not exist" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.276546 4981 scope.go:117] "RemoveContainer" containerID="c5f6787846da151f909ec9bf2d0b15f05fdd8c05b853ff4fb5d8d0d5c909e24c" Feb 27 18:50:43 crc kubenswrapper[4981]: E0227 18:50:43.276773 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5f6787846da151f909ec9bf2d0b15f05fdd8c05b853ff4fb5d8d0d5c909e24c\": container with ID starting with c5f6787846da151f909ec9bf2d0b15f05fdd8c05b853ff4fb5d8d0d5c909e24c not found: ID does not exist" containerID="c5f6787846da151f909ec9bf2d0b15f05fdd8c05b853ff4fb5d8d0d5c909e24c" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.276802 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5f6787846da151f909ec9bf2d0b15f05fdd8c05b853ff4fb5d8d0d5c909e24c"} err="failed to get container status \"c5f6787846da151f909ec9bf2d0b15f05fdd8c05b853ff4fb5d8d0d5c909e24c\": rpc error: code = NotFound desc = could not find container \"c5f6787846da151f909ec9bf2d0b15f05fdd8c05b853ff4fb5d8d0d5c909e24c\": container with ID starting with c5f6787846da151f909ec9bf2d0b15f05fdd8c05b853ff4fb5d8d0d5c909e24c not found: ID does not exist" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.276825 4981 scope.go:117] "RemoveContainer" containerID="db1ebaf4598a782c9c5b2168db854dbe4ea8aa0bb9e16d47cac86714bc064f6f" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.293747 4981 scope.go:117] "RemoveContainer" 
containerID="62d689abca5da9d5e754132cc0a3392e4a6ee7a2ef87ac714a6c35639a23e231" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.300268 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr" podStartSLOduration=3.300252002 podStartE2EDuration="3.300252002s" podCreationTimestamp="2026-02-27 18:50:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:50:43.246723903 +0000 UTC m=+342.725505073" watchObservedRunningTime="2026-02-27 18:50:43.300252002 +0000 UTC m=+342.779033152" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.316171 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6w9qb"] Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.316225 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6w9qb"] Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.339703 4981 scope.go:117] "RemoveContainer" containerID="acc9ac95d1ffba7892b2b8dddcedf82c83e3de0ef658d8c5a4e4c6ccdbfca52d" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.361290 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zrwz4"] Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.369677 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zrwz4"] Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.391392 4981 scope.go:117] "RemoveContainer" containerID="db1ebaf4598a782c9c5b2168db854dbe4ea8aa0bb9e16d47cac86714bc064f6f" Feb 27 18:50:43 crc kubenswrapper[4981]: E0227 18:50:43.393271 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db1ebaf4598a782c9c5b2168db854dbe4ea8aa0bb9e16d47cac86714bc064f6f\": container 
with ID starting with db1ebaf4598a782c9c5b2168db854dbe4ea8aa0bb9e16d47cac86714bc064f6f not found: ID does not exist" containerID="db1ebaf4598a782c9c5b2168db854dbe4ea8aa0bb9e16d47cac86714bc064f6f" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.393317 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db1ebaf4598a782c9c5b2168db854dbe4ea8aa0bb9e16d47cac86714bc064f6f"} err="failed to get container status \"db1ebaf4598a782c9c5b2168db854dbe4ea8aa0bb9e16d47cac86714bc064f6f\": rpc error: code = NotFound desc = could not find container \"db1ebaf4598a782c9c5b2168db854dbe4ea8aa0bb9e16d47cac86714bc064f6f\": container with ID starting with db1ebaf4598a782c9c5b2168db854dbe4ea8aa0bb9e16d47cac86714bc064f6f not found: ID does not exist" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.393355 4981 scope.go:117] "RemoveContainer" containerID="62d689abca5da9d5e754132cc0a3392e4a6ee7a2ef87ac714a6c35639a23e231" Feb 27 18:50:43 crc kubenswrapper[4981]: E0227 18:50:43.396366 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62d689abca5da9d5e754132cc0a3392e4a6ee7a2ef87ac714a6c35639a23e231\": container with ID starting with 62d689abca5da9d5e754132cc0a3392e4a6ee7a2ef87ac714a6c35639a23e231 not found: ID does not exist" containerID="62d689abca5da9d5e754132cc0a3392e4a6ee7a2ef87ac714a6c35639a23e231" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.396435 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62d689abca5da9d5e754132cc0a3392e4a6ee7a2ef87ac714a6c35639a23e231"} err="failed to get container status \"62d689abca5da9d5e754132cc0a3392e4a6ee7a2ef87ac714a6c35639a23e231\": rpc error: code = NotFound desc = could not find container \"62d689abca5da9d5e754132cc0a3392e4a6ee7a2ef87ac714a6c35639a23e231\": container with ID starting with 62d689abca5da9d5e754132cc0a3392e4a6ee7a2ef87ac714a6c35639a23e231 not 
found: ID does not exist" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.396484 4981 scope.go:117] "RemoveContainer" containerID="acc9ac95d1ffba7892b2b8dddcedf82c83e3de0ef658d8c5a4e4c6ccdbfca52d" Feb 27 18:50:43 crc kubenswrapper[4981]: E0227 18:50:43.397612 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acc9ac95d1ffba7892b2b8dddcedf82c83e3de0ef658d8c5a4e4c6ccdbfca52d\": container with ID starting with acc9ac95d1ffba7892b2b8dddcedf82c83e3de0ef658d8c5a4e4c6ccdbfca52d not found: ID does not exist" containerID="acc9ac95d1ffba7892b2b8dddcedf82c83e3de0ef658d8c5a4e4c6ccdbfca52d" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.397640 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc9ac95d1ffba7892b2b8dddcedf82c83e3de0ef658d8c5a4e4c6ccdbfca52d"} err="failed to get container status \"acc9ac95d1ffba7892b2b8dddcedf82c83e3de0ef658d8c5a4e4c6ccdbfca52d\": rpc error: code = NotFound desc = could not find container \"acc9ac95d1ffba7892b2b8dddcedf82c83e3de0ef658d8c5a4e4c6ccdbfca52d\": container with ID starting with acc9ac95d1ffba7892b2b8dddcedf82c83e3de0ef658d8c5a4e4c6ccdbfca52d not found: ID does not exist" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.415185 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.635036 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dec596a-69b2-4ca4-8529-d3f5faabc0b0" path="/var/lib/kubelet/pods/1dec596a-69b2-4ca4-8529-d3f5faabc0b0/volumes" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.635857 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26208de0-aaf1-47c9-80bd-71ed7b659d40" path="/var/lib/kubelet/pods/26208de0-aaf1-47c9-80bd-71ed7b659d40/volumes" Feb 27 18:50:43 
crc kubenswrapper[4981]: I0227 18:50:43.636299 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" path="/var/lib/kubelet/pods/b0d12f02-fe5f-4ca7-a190-852ad6284190/volumes" Feb 27 18:50:43 crc kubenswrapper[4981]: I0227 18:50:43.637421 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c" path="/var/lib/kubelet/pods/fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c/volumes" Feb 27 18:51:00 crc kubenswrapper[4981]: I0227 18:51:00.733094 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr"] Feb 27 18:51:00 crc kubenswrapper[4981]: I0227 18:51:00.733799 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr" podUID="bda1c106-87ee-4d5a-b83f-1670a89f9f8c" containerName="route-controller-manager" containerID="cri-o://ab3f1da2aa9a3818177cd17d4ee3c62a4dc9ef7bcff2847993e3e30b9944bffc" gracePeriod=30 Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.297336 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.370918 4981 generic.go:334] "Generic (PLEG): container finished" podID="bda1c106-87ee-4d5a-b83f-1670a89f9f8c" containerID="ab3f1da2aa9a3818177cd17d4ee3c62a4dc9ef7bcff2847993e3e30b9944bffc" exitCode=0 Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.370980 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr" event={"ID":"bda1c106-87ee-4d5a-b83f-1670a89f9f8c","Type":"ContainerDied","Data":"ab3f1da2aa9a3818177cd17d4ee3c62a4dc9ef7bcff2847993e3e30b9944bffc"} Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.371047 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr" event={"ID":"bda1c106-87ee-4d5a-b83f-1670a89f9f8c","Type":"ContainerDied","Data":"a6a082697e49081576b37281bc40bdf43f8e603a255945750ebc60c059d98026"} Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.371105 4981 scope.go:117] "RemoveContainer" containerID="ab3f1da2aa9a3818177cd17d4ee3c62a4dc9ef7bcff2847993e3e30b9944bffc" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.371251 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.400990 4981 scope.go:117] "RemoveContainer" containerID="ab3f1da2aa9a3818177cd17d4ee3c62a4dc9ef7bcff2847993e3e30b9944bffc" Feb 27 18:51:01 crc kubenswrapper[4981]: E0227 18:51:01.402025 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab3f1da2aa9a3818177cd17d4ee3c62a4dc9ef7bcff2847993e3e30b9944bffc\": container with ID starting with ab3f1da2aa9a3818177cd17d4ee3c62a4dc9ef7bcff2847993e3e30b9944bffc not found: ID does not exist" containerID="ab3f1da2aa9a3818177cd17d4ee3c62a4dc9ef7bcff2847993e3e30b9944bffc" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.402123 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab3f1da2aa9a3818177cd17d4ee3c62a4dc9ef7bcff2847993e3e30b9944bffc"} err="failed to get container status \"ab3f1da2aa9a3818177cd17d4ee3c62a4dc9ef7bcff2847993e3e30b9944bffc\": rpc error: code = NotFound desc = could not find container \"ab3f1da2aa9a3818177cd17d4ee3c62a4dc9ef7bcff2847993e3e30b9944bffc\": container with ID starting with ab3f1da2aa9a3818177cd17d4ee3c62a4dc9ef7bcff2847993e3e30b9944bffc not found: ID does not exist" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.449694 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bda1c106-87ee-4d5a-b83f-1670a89f9f8c-config\") pod \"bda1c106-87ee-4d5a-b83f-1670a89f9f8c\" (UID: \"bda1c106-87ee-4d5a-b83f-1670a89f9f8c\") " Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.449777 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p5bc\" (UniqueName: \"kubernetes.io/projected/bda1c106-87ee-4d5a-b83f-1670a89f9f8c-kube-api-access-2p5bc\") pod 
\"bda1c106-87ee-4d5a-b83f-1670a89f9f8c\" (UID: \"bda1c106-87ee-4d5a-b83f-1670a89f9f8c\") " Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.449830 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bda1c106-87ee-4d5a-b83f-1670a89f9f8c-serving-cert\") pod \"bda1c106-87ee-4d5a-b83f-1670a89f9f8c\" (UID: \"bda1c106-87ee-4d5a-b83f-1670a89f9f8c\") " Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.449888 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bda1c106-87ee-4d5a-b83f-1670a89f9f8c-client-ca\") pod \"bda1c106-87ee-4d5a-b83f-1670a89f9f8c\" (UID: \"bda1c106-87ee-4d5a-b83f-1670a89f9f8c\") " Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.450611 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bda1c106-87ee-4d5a-b83f-1670a89f9f8c-config" (OuterVolumeSpecName: "config") pod "bda1c106-87ee-4d5a-b83f-1670a89f9f8c" (UID: "bda1c106-87ee-4d5a-b83f-1670a89f9f8c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.450877 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bda1c106-87ee-4d5a-b83f-1670a89f9f8c-client-ca" (OuterVolumeSpecName: "client-ca") pod "bda1c106-87ee-4d5a-b83f-1670a89f9f8c" (UID: "bda1c106-87ee-4d5a-b83f-1670a89f9f8c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.460365 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bda1c106-87ee-4d5a-b83f-1670a89f9f8c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bda1c106-87ee-4d5a-b83f-1670a89f9f8c" (UID: "bda1c106-87ee-4d5a-b83f-1670a89f9f8c"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.460597 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bda1c106-87ee-4d5a-b83f-1670a89f9f8c-kube-api-access-2p5bc" (OuterVolumeSpecName: "kube-api-access-2p5bc") pod "bda1c106-87ee-4d5a-b83f-1670a89f9f8c" (UID: "bda1c106-87ee-4d5a-b83f-1670a89f9f8c"). InnerVolumeSpecName "kube-api-access-2p5bc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.551325 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bda1c106-87ee-4d5a-b83f-1670a89f9f8c-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.551353 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p5bc\" (UniqueName: \"kubernetes.io/projected/bda1c106-87ee-4d5a-b83f-1670a89f9f8c-kube-api-access-2p5bc\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.551367 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bda1c106-87ee-4d5a-b83f-1670a89f9f8c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.551375 4981 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bda1c106-87ee-4d5a-b83f-1670a89f9f8c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.699205 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr"] Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.705493 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-84f5659d8d-c57kr"] Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.948882 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66b857b88f-sqznr"] Feb 27 18:51:01 crc kubenswrapper[4981]: E0227 18:51:01.949226 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c" containerName="registry-server" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.949247 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c" containerName="registry-server" Feb 27 18:51:01 crc kubenswrapper[4981]: E0227 18:51:01.949267 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" containerName="registry-server" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.949279 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" containerName="registry-server" Feb 27 18:51:01 crc kubenswrapper[4981]: E0227 18:51:01.949307 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" containerName="extract-content" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.949320 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" containerName="extract-content" Feb 27 18:51:01 crc kubenswrapper[4981]: E0227 18:51:01.949343 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" containerName="extract-utilities" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.949356 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" containerName="extract-utilities" Feb 27 18:51:01 crc kubenswrapper[4981]: E0227 18:51:01.949373 4981 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="bda1c106-87ee-4d5a-b83f-1670a89f9f8c" containerName="route-controller-manager" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.949386 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="bda1c106-87ee-4d5a-b83f-1670a89f9f8c" containerName="route-controller-manager" Feb 27 18:51:01 crc kubenswrapper[4981]: E0227 18:51:01.949400 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c" containerName="extract-utilities" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.949414 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c" containerName="extract-utilities" Feb 27 18:51:01 crc kubenswrapper[4981]: E0227 18:51:01.949434 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c" containerName="extract-content" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.949465 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c" containerName="extract-content" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.949629 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb57a0d2-8b56-43c1-adbd-6d4d3bd17c3c" containerName="registry-server" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.949650 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="bda1c106-87ee-4d5a-b83f-1670a89f9f8c" containerName="route-controller-manager" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.949674 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d12f02-fe5f-4ca7-a190-852ad6284190" containerName="registry-server" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.950299 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-sqznr" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.953656 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.953927 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.954102 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.954345 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.954456 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.954614 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 18:51:01 crc kubenswrapper[4981]: I0227 18:51:01.963826 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66b857b88f-sqznr"] Feb 27 18:51:02 crc kubenswrapper[4981]: I0227 18:51:02.057400 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cfce39-5959-46e6-9371-ff803ba80a1e-config\") pod \"route-controller-manager-66b857b88f-sqznr\" (UID: \"95cfce39-5959-46e6-9371-ff803ba80a1e\") " pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-sqznr" Feb 27 18:51:02 crc kubenswrapper[4981]: I0227 18:51:02.058191 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95cfce39-5959-46e6-9371-ff803ba80a1e-serving-cert\") pod \"route-controller-manager-66b857b88f-sqznr\" (UID: \"95cfce39-5959-46e6-9371-ff803ba80a1e\") " pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-sqznr" Feb 27 18:51:02 crc kubenswrapper[4981]: I0227 18:51:02.058576 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq8jm\" (UniqueName: \"kubernetes.io/projected/95cfce39-5959-46e6-9371-ff803ba80a1e-kube-api-access-dq8jm\") pod \"route-controller-manager-66b857b88f-sqznr\" (UID: \"95cfce39-5959-46e6-9371-ff803ba80a1e\") " pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-sqznr" Feb 27 18:51:02 crc kubenswrapper[4981]: I0227 18:51:02.059119 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95cfce39-5959-46e6-9371-ff803ba80a1e-client-ca\") pod \"route-controller-manager-66b857b88f-sqznr\" (UID: \"95cfce39-5959-46e6-9371-ff803ba80a1e\") " pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-sqznr" Feb 27 18:51:02 crc kubenswrapper[4981]: I0227 18:51:02.160990 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq8jm\" (UniqueName: \"kubernetes.io/projected/95cfce39-5959-46e6-9371-ff803ba80a1e-kube-api-access-dq8jm\") pod \"route-controller-manager-66b857b88f-sqznr\" (UID: \"95cfce39-5959-46e6-9371-ff803ba80a1e\") " pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-sqznr" Feb 27 18:51:02 crc kubenswrapper[4981]: I0227 18:51:02.161161 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95cfce39-5959-46e6-9371-ff803ba80a1e-client-ca\") pod 
\"route-controller-manager-66b857b88f-sqznr\" (UID: \"95cfce39-5959-46e6-9371-ff803ba80a1e\") " pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-sqznr" Feb 27 18:51:02 crc kubenswrapper[4981]: I0227 18:51:02.161231 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cfce39-5959-46e6-9371-ff803ba80a1e-config\") pod \"route-controller-manager-66b857b88f-sqznr\" (UID: \"95cfce39-5959-46e6-9371-ff803ba80a1e\") " pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-sqznr" Feb 27 18:51:02 crc kubenswrapper[4981]: I0227 18:51:02.161273 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95cfce39-5959-46e6-9371-ff803ba80a1e-serving-cert\") pod \"route-controller-manager-66b857b88f-sqznr\" (UID: \"95cfce39-5959-46e6-9371-ff803ba80a1e\") " pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-sqznr" Feb 27 18:51:02 crc kubenswrapper[4981]: I0227 18:51:02.165177 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95cfce39-5959-46e6-9371-ff803ba80a1e-client-ca\") pod \"route-controller-manager-66b857b88f-sqznr\" (UID: \"95cfce39-5959-46e6-9371-ff803ba80a1e\") " pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-sqznr" Feb 27 18:51:02 crc kubenswrapper[4981]: I0227 18:51:02.167861 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95cfce39-5959-46e6-9371-ff803ba80a1e-config\") pod \"route-controller-manager-66b857b88f-sqznr\" (UID: \"95cfce39-5959-46e6-9371-ff803ba80a1e\") " pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-sqznr" Feb 27 18:51:02 crc kubenswrapper[4981]: I0227 18:51:02.173463 4981 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95cfce39-5959-46e6-9371-ff803ba80a1e-serving-cert\") pod \"route-controller-manager-66b857b88f-sqznr\" (UID: \"95cfce39-5959-46e6-9371-ff803ba80a1e\") " pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-sqznr" Feb 27 18:51:02 crc kubenswrapper[4981]: I0227 18:51:02.183750 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq8jm\" (UniqueName: \"kubernetes.io/projected/95cfce39-5959-46e6-9371-ff803ba80a1e-kube-api-access-dq8jm\") pod \"route-controller-manager-66b857b88f-sqznr\" (UID: \"95cfce39-5959-46e6-9371-ff803ba80a1e\") " pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-sqznr" Feb 27 18:51:02 crc kubenswrapper[4981]: I0227 18:51:02.276803 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-sqznr" Feb 27 18:51:02 crc kubenswrapper[4981]: I0227 18:51:02.776377 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66b857b88f-sqznr"] Feb 27 18:51:02 crc kubenswrapper[4981]: W0227 18:51:02.784903 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95cfce39_5959_46e6_9371_ff803ba80a1e.slice/crio-41ba79ebcfcd0e02edc0ec3459be51835072ee82c775addf0f2a7eee31273375 WatchSource:0}: Error finding container 41ba79ebcfcd0e02edc0ec3459be51835072ee82c775addf0f2a7eee31273375: Status 404 returned error can't find the container with id 41ba79ebcfcd0e02edc0ec3459be51835072ee82c775addf0f2a7eee31273375 Feb 27 18:51:03 crc kubenswrapper[4981]: I0227 18:51:03.387105 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-sqznr" 
event={"ID":"95cfce39-5959-46e6-9371-ff803ba80a1e","Type":"ContainerStarted","Data":"1732fcd637cb4c5b17ad731add28b0e02f795978144d82d0b7cb8b061a90ecec"} Feb 27 18:51:03 crc kubenswrapper[4981]: I0227 18:51:03.387522 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-sqznr" Feb 27 18:51:03 crc kubenswrapper[4981]: I0227 18:51:03.387542 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-sqznr" event={"ID":"95cfce39-5959-46e6-9371-ff803ba80a1e","Type":"ContainerStarted","Data":"41ba79ebcfcd0e02edc0ec3459be51835072ee82c775addf0f2a7eee31273375"} Feb 27 18:51:03 crc kubenswrapper[4981]: I0227 18:51:03.418906 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-sqznr" podStartSLOduration=3.418879221 podStartE2EDuration="3.418879221s" podCreationTimestamp="2026-02-27 18:51:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:51:03.414680031 +0000 UTC m=+362.893461211" watchObservedRunningTime="2026-02-27 18:51:03.418879221 +0000 UTC m=+362.897660411" Feb 27 18:51:03 crc kubenswrapper[4981]: I0227 18:51:03.504328 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-66b857b88f-sqznr" Feb 27 18:51:03 crc kubenswrapper[4981]: I0227 18:51:03.639019 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bda1c106-87ee-4d5a-b83f-1670a89f9f8c" path="/var/lib/kubelet/pods/bda1c106-87ee-4d5a-b83f-1670a89f9f8c/volumes" Feb 27 18:51:19 crc kubenswrapper[4981]: I0227 18:51:19.564799 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5b8m4"] Feb 27 18:51:19 crc 
kubenswrapper[4981]: I0227 18:51:19.565990 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:19 crc kubenswrapper[4981]: I0227 18:51:19.577362 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5b8m4"] Feb 27 18:51:19 crc kubenswrapper[4981]: I0227 18:51:19.700306 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7f5d4618-f20e-4b02-a002-38ba0bd548b7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5b8m4\" (UID: \"7f5d4618-f20e-4b02-a002-38ba0bd548b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:19 crc kubenswrapper[4981]: I0227 18:51:19.700376 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z7jd\" (UniqueName: \"kubernetes.io/projected/7f5d4618-f20e-4b02-a002-38ba0bd548b7-kube-api-access-7z7jd\") pod \"image-registry-66df7c8f76-5b8m4\" (UID: \"7f5d4618-f20e-4b02-a002-38ba0bd548b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:19 crc kubenswrapper[4981]: I0227 18:51:19.700405 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f5d4618-f20e-4b02-a002-38ba0bd548b7-bound-sa-token\") pod \"image-registry-66df7c8f76-5b8m4\" (UID: \"7f5d4618-f20e-4b02-a002-38ba0bd548b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:19 crc kubenswrapper[4981]: I0227 18:51:19.700427 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f5d4618-f20e-4b02-a002-38ba0bd548b7-trusted-ca\") pod \"image-registry-66df7c8f76-5b8m4\" (UID: 
\"7f5d4618-f20e-4b02-a002-38ba0bd548b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:19 crc kubenswrapper[4981]: I0227 18:51:19.700448 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7f5d4618-f20e-4b02-a002-38ba0bd548b7-registry-certificates\") pod \"image-registry-66df7c8f76-5b8m4\" (UID: \"7f5d4618-f20e-4b02-a002-38ba0bd548b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:19 crc kubenswrapper[4981]: I0227 18:51:19.700573 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7f5d4618-f20e-4b02-a002-38ba0bd548b7-registry-tls\") pod \"image-registry-66df7c8f76-5b8m4\" (UID: \"7f5d4618-f20e-4b02-a002-38ba0bd548b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:19 crc kubenswrapper[4981]: I0227 18:51:19.700605 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-5b8m4\" (UID: \"7f5d4618-f20e-4b02-a002-38ba0bd548b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:19 crc kubenswrapper[4981]: I0227 18:51:19.700693 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7f5d4618-f20e-4b02-a002-38ba0bd548b7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5b8m4\" (UID: \"7f5d4618-f20e-4b02-a002-38ba0bd548b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:19 crc kubenswrapper[4981]: I0227 18:51:19.721457 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-5b8m4\" (UID: \"7f5d4618-f20e-4b02-a002-38ba0bd548b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:19 crc kubenswrapper[4981]: I0227 18:51:19.802303 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7f5d4618-f20e-4b02-a002-38ba0bd548b7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5b8m4\" (UID: \"7f5d4618-f20e-4b02-a002-38ba0bd548b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:19 crc kubenswrapper[4981]: I0227 18:51:19.802449 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7f5d4618-f20e-4b02-a002-38ba0bd548b7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5b8m4\" (UID: \"7f5d4618-f20e-4b02-a002-38ba0bd548b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:19 crc kubenswrapper[4981]: I0227 18:51:19.802512 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z7jd\" (UniqueName: \"kubernetes.io/projected/7f5d4618-f20e-4b02-a002-38ba0bd548b7-kube-api-access-7z7jd\") pod \"image-registry-66df7c8f76-5b8m4\" (UID: \"7f5d4618-f20e-4b02-a002-38ba0bd548b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:19 crc kubenswrapper[4981]: I0227 18:51:19.802545 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f5d4618-f20e-4b02-a002-38ba0bd548b7-trusted-ca\") pod \"image-registry-66df7c8f76-5b8m4\" (UID: \"7f5d4618-f20e-4b02-a002-38ba0bd548b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:19 
crc kubenswrapper[4981]: I0227 18:51:19.802576 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f5d4618-f20e-4b02-a002-38ba0bd548b7-bound-sa-token\") pod \"image-registry-66df7c8f76-5b8m4\" (UID: \"7f5d4618-f20e-4b02-a002-38ba0bd548b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:19 crc kubenswrapper[4981]: I0227 18:51:19.802608 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7f5d4618-f20e-4b02-a002-38ba0bd548b7-registry-certificates\") pod \"image-registry-66df7c8f76-5b8m4\" (UID: \"7f5d4618-f20e-4b02-a002-38ba0bd548b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:19 crc kubenswrapper[4981]: I0227 18:51:19.802677 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7f5d4618-f20e-4b02-a002-38ba0bd548b7-registry-tls\") pod \"image-registry-66df7c8f76-5b8m4\" (UID: \"7f5d4618-f20e-4b02-a002-38ba0bd548b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:19 crc kubenswrapper[4981]: I0227 18:51:19.803219 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7f5d4618-f20e-4b02-a002-38ba0bd548b7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5b8m4\" (UID: \"7f5d4618-f20e-4b02-a002-38ba0bd548b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:19 crc kubenswrapper[4981]: I0227 18:51:19.805149 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7f5d4618-f20e-4b02-a002-38ba0bd548b7-registry-certificates\") pod \"image-registry-66df7c8f76-5b8m4\" (UID: \"7f5d4618-f20e-4b02-a002-38ba0bd548b7\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:19 crc kubenswrapper[4981]: I0227 18:51:19.805208 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f5d4618-f20e-4b02-a002-38ba0bd548b7-trusted-ca\") pod \"image-registry-66df7c8f76-5b8m4\" (UID: \"7f5d4618-f20e-4b02-a002-38ba0bd548b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:19 crc kubenswrapper[4981]: I0227 18:51:19.809912 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7f5d4618-f20e-4b02-a002-38ba0bd548b7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5b8m4\" (UID: \"7f5d4618-f20e-4b02-a002-38ba0bd548b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:19 crc kubenswrapper[4981]: I0227 18:51:19.816767 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7f5d4618-f20e-4b02-a002-38ba0bd548b7-registry-tls\") pod \"image-registry-66df7c8f76-5b8m4\" (UID: \"7f5d4618-f20e-4b02-a002-38ba0bd548b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:19 crc kubenswrapper[4981]: I0227 18:51:19.830384 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f5d4618-f20e-4b02-a002-38ba0bd548b7-bound-sa-token\") pod \"image-registry-66df7c8f76-5b8m4\" (UID: \"7f5d4618-f20e-4b02-a002-38ba0bd548b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:19 crc kubenswrapper[4981]: I0227 18:51:19.831424 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z7jd\" (UniqueName: \"kubernetes.io/projected/7f5d4618-f20e-4b02-a002-38ba0bd548b7-kube-api-access-7z7jd\") pod \"image-registry-66df7c8f76-5b8m4\" (UID: 
\"7f5d4618-f20e-4b02-a002-38ba0bd548b7\") " pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:19 crc kubenswrapper[4981]: I0227 18:51:19.891403 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:20 crc kubenswrapper[4981]: I0227 18:51:20.370643 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5b8m4"] Feb 27 18:51:20 crc kubenswrapper[4981]: I0227 18:51:20.500704 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" event={"ID":"7f5d4618-f20e-4b02-a002-38ba0bd548b7","Type":"ContainerStarted","Data":"6c6e94fe3be82dec3acde8567fa790cd62f90daf4e70216d3931dd7f23e8f70e"} Feb 27 18:51:20 crc kubenswrapper[4981]: I0227 18:51:20.688193 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-597c79bdbb-bd7qb"] Feb 27 18:51:20 crc kubenswrapper[4981]: I0227 18:51:20.688516 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" podUID="0d6ffbc4-17da-44ee-8e00-66601141abd7" containerName="controller-manager" containerID="cri-o://7c9686296711fbe3ff893c1f590b0841ff9a92aacc41723d6af978383eb22d0c" gracePeriod=30 Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.218784 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.324312 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d6ffbc4-17da-44ee-8e00-66601141abd7-serving-cert\") pod \"0d6ffbc4-17da-44ee-8e00-66601141abd7\" (UID: \"0d6ffbc4-17da-44ee-8e00-66601141abd7\") " Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.324375 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d6ffbc4-17da-44ee-8e00-66601141abd7-client-ca\") pod \"0d6ffbc4-17da-44ee-8e00-66601141abd7\" (UID: \"0d6ffbc4-17da-44ee-8e00-66601141abd7\") " Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.324441 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d6ffbc4-17da-44ee-8e00-66601141abd7-config\") pod \"0d6ffbc4-17da-44ee-8e00-66601141abd7\" (UID: \"0d6ffbc4-17da-44ee-8e00-66601141abd7\") " Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.324516 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw42x\" (UniqueName: \"kubernetes.io/projected/0d6ffbc4-17da-44ee-8e00-66601141abd7-kube-api-access-sw42x\") pod \"0d6ffbc4-17da-44ee-8e00-66601141abd7\" (UID: \"0d6ffbc4-17da-44ee-8e00-66601141abd7\") " Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.324623 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d6ffbc4-17da-44ee-8e00-66601141abd7-proxy-ca-bundles\") pod \"0d6ffbc4-17da-44ee-8e00-66601141abd7\" (UID: \"0d6ffbc4-17da-44ee-8e00-66601141abd7\") " Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.325657 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0d6ffbc4-17da-44ee-8e00-66601141abd7-client-ca" (OuterVolumeSpecName: "client-ca") pod "0d6ffbc4-17da-44ee-8e00-66601141abd7" (UID: "0d6ffbc4-17da-44ee-8e00-66601141abd7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.325804 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d6ffbc4-17da-44ee-8e00-66601141abd7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0d6ffbc4-17da-44ee-8e00-66601141abd7" (UID: "0d6ffbc4-17da-44ee-8e00-66601141abd7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.325933 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d6ffbc4-17da-44ee-8e00-66601141abd7-config" (OuterVolumeSpecName: "config") pod "0d6ffbc4-17da-44ee-8e00-66601141abd7" (UID: "0d6ffbc4-17da-44ee-8e00-66601141abd7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.330573 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d6ffbc4-17da-44ee-8e00-66601141abd7-kube-api-access-sw42x" (OuterVolumeSpecName: "kube-api-access-sw42x") pod "0d6ffbc4-17da-44ee-8e00-66601141abd7" (UID: "0d6ffbc4-17da-44ee-8e00-66601141abd7"). InnerVolumeSpecName "kube-api-access-sw42x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.331233 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d6ffbc4-17da-44ee-8e00-66601141abd7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0d6ffbc4-17da-44ee-8e00-66601141abd7" (UID: "0d6ffbc4-17da-44ee-8e00-66601141abd7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.426023 4981 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d6ffbc4-17da-44ee-8e00-66601141abd7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.426172 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d6ffbc4-17da-44ee-8e00-66601141abd7-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.426242 4981 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d6ffbc4-17da-44ee-8e00-66601141abd7-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.426307 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d6ffbc4-17da-44ee-8e00-66601141abd7-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.426363 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw42x\" (UniqueName: \"kubernetes.io/projected/0d6ffbc4-17da-44ee-8e00-66601141abd7-kube-api-access-sw42x\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.510195 4981 generic.go:334] "Generic (PLEG): container finished" podID="0d6ffbc4-17da-44ee-8e00-66601141abd7" containerID="7c9686296711fbe3ff893c1f590b0841ff9a92aacc41723d6af978383eb22d0c" exitCode=0 Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.510284 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" event={"ID":"0d6ffbc4-17da-44ee-8e00-66601141abd7","Type":"ContainerDied","Data":"7c9686296711fbe3ff893c1f590b0841ff9a92aacc41723d6af978383eb22d0c"} Feb 27 18:51:21 crc 
kubenswrapper[4981]: I0227 18:51:21.510299 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.510329 4981 scope.go:117] "RemoveContainer" containerID="7c9686296711fbe3ff893c1f590b0841ff9a92aacc41723d6af978383eb22d0c" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.510316 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-597c79bdbb-bd7qb" event={"ID":"0d6ffbc4-17da-44ee-8e00-66601141abd7","Type":"ContainerDied","Data":"e6d918b7442ef45c636d9dfa83413dd7cdc46d802b09d93f46541c4c0fd8e8ff"} Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.513296 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.514046 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" event={"ID":"7f5d4618-f20e-4b02-a002-38ba0bd548b7","Type":"ContainerStarted","Data":"53f7068e53dbc29098c2bae358d640323fb20c7f10c88cc39023303d92adfe98"} Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.540223 4981 scope.go:117] "RemoveContainer" containerID="7c9686296711fbe3ff893c1f590b0841ff9a92aacc41723d6af978383eb22d0c" Feb 27 18:51:21 crc kubenswrapper[4981]: E0227 18:51:21.542141 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c9686296711fbe3ff893c1f590b0841ff9a92aacc41723d6af978383eb22d0c\": container with ID starting with 7c9686296711fbe3ff893c1f590b0841ff9a92aacc41723d6af978383eb22d0c not found: ID does not exist" containerID="7c9686296711fbe3ff893c1f590b0841ff9a92aacc41723d6af978383eb22d0c" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.542191 4981 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"7c9686296711fbe3ff893c1f590b0841ff9a92aacc41723d6af978383eb22d0c"} err="failed to get container status \"7c9686296711fbe3ff893c1f590b0841ff9a92aacc41723d6af978383eb22d0c\": rpc error: code = NotFound desc = could not find container \"7c9686296711fbe3ff893c1f590b0841ff9a92aacc41723d6af978383eb22d0c\": container with ID starting with 7c9686296711fbe3ff893c1f590b0841ff9a92aacc41723d6af978383eb22d0c not found: ID does not exist" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.550090 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" podStartSLOduration=2.550038348 podStartE2EDuration="2.550038348s" podCreationTimestamp="2026-02-27 18:51:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:51:21.539836343 +0000 UTC m=+381.018617503" watchObservedRunningTime="2026-02-27 18:51:21.550038348 +0000 UTC m=+381.028819538" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.564698 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-597c79bdbb-bd7qb"] Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.570604 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-597c79bdbb-bd7qb"] Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.637380 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d6ffbc4-17da-44ee-8e00-66601141abd7" path="/var/lib/kubelet/pods/0d6ffbc4-17da-44ee-8e00-66601141abd7/volumes" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.958015 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-658f9978cf-cptv2"] Feb 27 18:51:21 crc kubenswrapper[4981]: E0227 18:51:21.958348 4981 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="0d6ffbc4-17da-44ee-8e00-66601141abd7" containerName="controller-manager" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.958369 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d6ffbc4-17da-44ee-8e00-66601141abd7" containerName="controller-manager" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.958552 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d6ffbc4-17da-44ee-8e00-66601141abd7" containerName="controller-manager" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.959145 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-658f9978cf-cptv2" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.961548 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.969661 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.969669 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.970211 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.970349 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.970471 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.975617 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" 
Feb 27 18:51:21 crc kubenswrapper[4981]: I0227 18:51:21.977431 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-658f9978cf-cptv2"] Feb 27 18:51:22 crc kubenswrapper[4981]: I0227 18:51:22.035626 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/925a4581-7c40-4140-8907-fb89d8d43558-config\") pod \"controller-manager-658f9978cf-cptv2\" (UID: \"925a4581-7c40-4140-8907-fb89d8d43558\") " pod="openshift-controller-manager/controller-manager-658f9978cf-cptv2" Feb 27 18:51:22 crc kubenswrapper[4981]: I0227 18:51:22.035827 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjcwg\" (UniqueName: \"kubernetes.io/projected/925a4581-7c40-4140-8907-fb89d8d43558-kube-api-access-jjcwg\") pod \"controller-manager-658f9978cf-cptv2\" (UID: \"925a4581-7c40-4140-8907-fb89d8d43558\") " pod="openshift-controller-manager/controller-manager-658f9978cf-cptv2" Feb 27 18:51:22 crc kubenswrapper[4981]: I0227 18:51:22.036016 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/925a4581-7c40-4140-8907-fb89d8d43558-proxy-ca-bundles\") pod \"controller-manager-658f9978cf-cptv2\" (UID: \"925a4581-7c40-4140-8907-fb89d8d43558\") " pod="openshift-controller-manager/controller-manager-658f9978cf-cptv2" Feb 27 18:51:22 crc kubenswrapper[4981]: I0227 18:51:22.036198 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/925a4581-7c40-4140-8907-fb89d8d43558-client-ca\") pod \"controller-manager-658f9978cf-cptv2\" (UID: \"925a4581-7c40-4140-8907-fb89d8d43558\") " pod="openshift-controller-manager/controller-manager-658f9978cf-cptv2" Feb 27 18:51:22 crc kubenswrapper[4981]: I0227 
18:51:22.036283 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/925a4581-7c40-4140-8907-fb89d8d43558-serving-cert\") pod \"controller-manager-658f9978cf-cptv2\" (UID: \"925a4581-7c40-4140-8907-fb89d8d43558\") " pod="openshift-controller-manager/controller-manager-658f9978cf-cptv2" Feb 27 18:51:22 crc kubenswrapper[4981]: I0227 18:51:22.137507 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/925a4581-7c40-4140-8907-fb89d8d43558-serving-cert\") pod \"controller-manager-658f9978cf-cptv2\" (UID: \"925a4581-7c40-4140-8907-fb89d8d43558\") " pod="openshift-controller-manager/controller-manager-658f9978cf-cptv2" Feb 27 18:51:22 crc kubenswrapper[4981]: I0227 18:51:22.137588 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/925a4581-7c40-4140-8907-fb89d8d43558-config\") pod \"controller-manager-658f9978cf-cptv2\" (UID: \"925a4581-7c40-4140-8907-fb89d8d43558\") " pod="openshift-controller-manager/controller-manager-658f9978cf-cptv2" Feb 27 18:51:22 crc kubenswrapper[4981]: I0227 18:51:22.137689 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjcwg\" (UniqueName: \"kubernetes.io/projected/925a4581-7c40-4140-8907-fb89d8d43558-kube-api-access-jjcwg\") pod \"controller-manager-658f9978cf-cptv2\" (UID: \"925a4581-7c40-4140-8907-fb89d8d43558\") " pod="openshift-controller-manager/controller-manager-658f9978cf-cptv2" Feb 27 18:51:22 crc kubenswrapper[4981]: I0227 18:51:22.137728 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/925a4581-7c40-4140-8907-fb89d8d43558-proxy-ca-bundles\") pod \"controller-manager-658f9978cf-cptv2\" (UID: \"925a4581-7c40-4140-8907-fb89d8d43558\") " 
pod="openshift-controller-manager/controller-manager-658f9978cf-cptv2" Feb 27 18:51:22 crc kubenswrapper[4981]: I0227 18:51:22.137795 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/925a4581-7c40-4140-8907-fb89d8d43558-client-ca\") pod \"controller-manager-658f9978cf-cptv2\" (UID: \"925a4581-7c40-4140-8907-fb89d8d43558\") " pod="openshift-controller-manager/controller-manager-658f9978cf-cptv2" Feb 27 18:51:22 crc kubenswrapper[4981]: I0227 18:51:22.140175 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/925a4581-7c40-4140-8907-fb89d8d43558-config\") pod \"controller-manager-658f9978cf-cptv2\" (UID: \"925a4581-7c40-4140-8907-fb89d8d43558\") " pod="openshift-controller-manager/controller-manager-658f9978cf-cptv2" Feb 27 18:51:22 crc kubenswrapper[4981]: I0227 18:51:22.140245 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/925a4581-7c40-4140-8907-fb89d8d43558-client-ca\") pod \"controller-manager-658f9978cf-cptv2\" (UID: \"925a4581-7c40-4140-8907-fb89d8d43558\") " pod="openshift-controller-manager/controller-manager-658f9978cf-cptv2" Feb 27 18:51:22 crc kubenswrapper[4981]: I0227 18:51:22.140363 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/925a4581-7c40-4140-8907-fb89d8d43558-proxy-ca-bundles\") pod \"controller-manager-658f9978cf-cptv2\" (UID: \"925a4581-7c40-4140-8907-fb89d8d43558\") " pod="openshift-controller-manager/controller-manager-658f9978cf-cptv2" Feb 27 18:51:22 crc kubenswrapper[4981]: I0227 18:51:22.144281 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/925a4581-7c40-4140-8907-fb89d8d43558-serving-cert\") pod \"controller-manager-658f9978cf-cptv2\" (UID: 
\"925a4581-7c40-4140-8907-fb89d8d43558\") " pod="openshift-controller-manager/controller-manager-658f9978cf-cptv2" Feb 27 18:51:22 crc kubenswrapper[4981]: I0227 18:51:22.160332 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjcwg\" (UniqueName: \"kubernetes.io/projected/925a4581-7c40-4140-8907-fb89d8d43558-kube-api-access-jjcwg\") pod \"controller-manager-658f9978cf-cptv2\" (UID: \"925a4581-7c40-4140-8907-fb89d8d43558\") " pod="openshift-controller-manager/controller-manager-658f9978cf-cptv2" Feb 27 18:51:22 crc kubenswrapper[4981]: I0227 18:51:22.294342 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-658f9978cf-cptv2" Feb 27 18:51:22 crc kubenswrapper[4981]: I0227 18:51:22.608089 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-658f9978cf-cptv2"] Feb 27 18:51:22 crc kubenswrapper[4981]: W0227 18:51:22.610526 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod925a4581_7c40_4140_8907_fb89d8d43558.slice/crio-713503ee1237c87f0277c39e7f4b0cadfae405f5ea38eda7c26a75d90ed8f779 WatchSource:0}: Error finding container 713503ee1237c87f0277c39e7f4b0cadfae405f5ea38eda7c26a75d90ed8f779: Status 404 returned error can't find the container with id 713503ee1237c87f0277c39e7f4b0cadfae405f5ea38eda7c26a75d90ed8f779 Feb 27 18:51:23 crc kubenswrapper[4981]: I0227 18:51:23.532855 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-658f9978cf-cptv2" event={"ID":"925a4581-7c40-4140-8907-fb89d8d43558","Type":"ContainerStarted","Data":"61a3ed840088a65a0b38d6be11574be332ca134474555d993f2b011700eddbf8"} Feb 27 18:51:23 crc kubenswrapper[4981]: I0227 18:51:23.533321 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-658f9978cf-cptv2" event={"ID":"925a4581-7c40-4140-8907-fb89d8d43558","Type":"ContainerStarted","Data":"713503ee1237c87f0277c39e7f4b0cadfae405f5ea38eda7c26a75d90ed8f779"} Feb 27 18:51:23 crc kubenswrapper[4981]: I0227 18:51:23.568567 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-658f9978cf-cptv2" podStartSLOduration=3.56854414 podStartE2EDuration="3.56854414s" podCreationTimestamp="2026-02-27 18:51:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:51:23.563388061 +0000 UTC m=+383.042169251" watchObservedRunningTime="2026-02-27 18:51:23.56854414 +0000 UTC m=+383.047325330" Feb 27 18:51:24 crc kubenswrapper[4981]: I0227 18:51:24.539040 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-658f9978cf-cptv2" Feb 27 18:51:24 crc kubenswrapper[4981]: I0227 18:51:24.546196 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-658f9978cf-cptv2" Feb 27 18:51:39 crc kubenswrapper[4981]: I0227 18:51:39.902389 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-5b8m4" Feb 27 18:51:39 crc kubenswrapper[4981]: I0227 18:51:39.991483 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kmvhm"] Feb 27 18:51:50 crc kubenswrapper[4981]: I0227 18:51:50.249014 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:51:50 crc 
kubenswrapper[4981]: I0227 18:51:50.250282 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.382453 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m9ppw"] Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.384648 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m9ppw" podUID="e3bd579c-4d5b-496d-bade-9a78e439970d" containerName="registry-server" containerID="cri-o://f11af0fe2f067bdd8546479b54f3936d45147bcd6b90d3946ed7619fbc83c4a2" gracePeriod=30 Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.394972 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fzncx"] Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.395607 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fzncx" podUID="a8d010a2-1cec-4e71-ac60-29b2e20787f4" containerName="registry-server" containerID="cri-o://5f610ebf3095dd86017eb6aba0ff01b369c00986eb1dc58f0b78e391f5b9edf8" gracePeriod=30 Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.415132 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pcmgp"] Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.416167 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-pcmgp" podUID="ef632318-2ac5-418d-b9d4-dcd616b4d768" containerName="marketplace-operator" 
containerID="cri-o://34b5f6a6014362fa4ca77f038657ff566565bf3181688e1629ac15b25e8a622a" gracePeriod=30 Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.425197 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6cjfh"] Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.426284 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6cjfh" Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.432429 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6nhjk"] Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.432646 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6nhjk" podUID="afecaba0-c366-4a2f-a944-1a282869a955" containerName="registry-server" containerID="cri-o://4a3bcf4a243f97dc697ec081553027686846d6aeee8302d0abf61274e95efeea" gracePeriod=30 Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.447164 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6cjfh"] Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.449970 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rmtpf"] Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.450436 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rmtpf" podUID="fbc8a428-3dab-402e-a105-0576aa196dcc" containerName="registry-server" containerID="cri-o://513e8610a9893c11f22b17fcb728e24e054da4bdb201a345b051f58818455550" gracePeriod=30 Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.485687 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/5fc084d6-4cd6-4556-a0ba-80b909119353-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6cjfh\" (UID: \"5fc084d6-4cd6-4556-a0ba-80b909119353\") " pod="openshift-marketplace/marketplace-operator-79b997595-6cjfh" Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.485757 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5fc084d6-4cd6-4556-a0ba-80b909119353-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6cjfh\" (UID: \"5fc084d6-4cd6-4556-a0ba-80b909119353\") " pod="openshift-marketplace/marketplace-operator-79b997595-6cjfh" Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.485786 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qtlz\" (UniqueName: \"kubernetes.io/projected/5fc084d6-4cd6-4556-a0ba-80b909119353-kube-api-access-8qtlz\") pod \"marketplace-operator-79b997595-6cjfh\" (UID: \"5fc084d6-4cd6-4556-a0ba-80b909119353\") " pod="openshift-marketplace/marketplace-operator-79b997595-6cjfh" Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.586650 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5fc084d6-4cd6-4556-a0ba-80b909119353-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6cjfh\" (UID: \"5fc084d6-4cd6-4556-a0ba-80b909119353\") " pod="openshift-marketplace/marketplace-operator-79b997595-6cjfh" Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.586713 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5fc084d6-4cd6-4556-a0ba-80b909119353-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6cjfh\" (UID: \"5fc084d6-4cd6-4556-a0ba-80b909119353\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-6cjfh" Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.586742 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qtlz\" (UniqueName: \"kubernetes.io/projected/5fc084d6-4cd6-4556-a0ba-80b909119353-kube-api-access-8qtlz\") pod \"marketplace-operator-79b997595-6cjfh\" (UID: \"5fc084d6-4cd6-4556-a0ba-80b909119353\") " pod="openshift-marketplace/marketplace-operator-79b997595-6cjfh" Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.588469 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5fc084d6-4cd6-4556-a0ba-80b909119353-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6cjfh\" (UID: \"5fc084d6-4cd6-4556-a0ba-80b909119353\") " pod="openshift-marketplace/marketplace-operator-79b997595-6cjfh" Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.600012 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5fc084d6-4cd6-4556-a0ba-80b909119353-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6cjfh\" (UID: \"5fc084d6-4cd6-4556-a0ba-80b909119353\") " pod="openshift-marketplace/marketplace-operator-79b997595-6cjfh" Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.604296 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qtlz\" (UniqueName: \"kubernetes.io/projected/5fc084d6-4cd6-4556-a0ba-80b909119353-kube-api-access-8qtlz\") pod \"marketplace-operator-79b997595-6cjfh\" (UID: \"5fc084d6-4cd6-4556-a0ba-80b909119353\") " pod="openshift-marketplace/marketplace-operator-79b997595-6cjfh" Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.744963 4981 generic.go:334] "Generic (PLEG): container finished" podID="e3bd579c-4d5b-496d-bade-9a78e439970d" 
containerID="f11af0fe2f067bdd8546479b54f3936d45147bcd6b90d3946ed7619fbc83c4a2" exitCode=0 Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.745047 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m9ppw" event={"ID":"e3bd579c-4d5b-496d-bade-9a78e439970d","Type":"ContainerDied","Data":"f11af0fe2f067bdd8546479b54f3936d45147bcd6b90d3946ed7619fbc83c4a2"} Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.748247 4981 generic.go:334] "Generic (PLEG): container finished" podID="ef632318-2ac5-418d-b9d4-dcd616b4d768" containerID="34b5f6a6014362fa4ca77f038657ff566565bf3181688e1629ac15b25e8a622a" exitCode=0 Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.748318 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pcmgp" event={"ID":"ef632318-2ac5-418d-b9d4-dcd616b4d768","Type":"ContainerDied","Data":"34b5f6a6014362fa4ca77f038657ff566565bf3181688e1629ac15b25e8a622a"} Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.750689 4981 generic.go:334] "Generic (PLEG): container finished" podID="a8d010a2-1cec-4e71-ac60-29b2e20787f4" containerID="5f610ebf3095dd86017eb6aba0ff01b369c00986eb1dc58f0b78e391f5b9edf8" exitCode=0 Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.750752 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fzncx" event={"ID":"a8d010a2-1cec-4e71-ac60-29b2e20787f4","Type":"ContainerDied","Data":"5f610ebf3095dd86017eb6aba0ff01b369c00986eb1dc58f0b78e391f5b9edf8"} Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.751235 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6cjfh" Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.756563 4981 generic.go:334] "Generic (PLEG): container finished" podID="afecaba0-c366-4a2f-a944-1a282869a955" containerID="4a3bcf4a243f97dc697ec081553027686846d6aeee8302d0abf61274e95efeea" exitCode=0 Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.756610 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6nhjk" event={"ID":"afecaba0-c366-4a2f-a944-1a282869a955","Type":"ContainerDied","Data":"4a3bcf4a243f97dc697ec081553027686846d6aeee8302d0abf61274e95efeea"} Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.759170 4981 generic.go:334] "Generic (PLEG): container finished" podID="fbc8a428-3dab-402e-a105-0576aa196dcc" containerID="513e8610a9893c11f22b17fcb728e24e054da4bdb201a345b051f58818455550" exitCode=0 Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.759199 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmtpf" event={"ID":"fbc8a428-3dab-402e-a105-0576aa196dcc","Type":"ContainerDied","Data":"513e8610a9893c11f22b17fcb728e24e054da4bdb201a345b051f58818455550"} Feb 27 18:51:52 crc kubenswrapper[4981]: I0227 18:51:52.898431 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fzncx" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.004229 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8d010a2-1cec-4e71-ac60-29b2e20787f4-utilities\") pod \"a8d010a2-1cec-4e71-ac60-29b2e20787f4\" (UID: \"a8d010a2-1cec-4e71-ac60-29b2e20787f4\") " Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.004284 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8d010a2-1cec-4e71-ac60-29b2e20787f4-catalog-content\") pod \"a8d010a2-1cec-4e71-ac60-29b2e20787f4\" (UID: \"a8d010a2-1cec-4e71-ac60-29b2e20787f4\") " Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.004381 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb2rc\" (UniqueName: \"kubernetes.io/projected/a8d010a2-1cec-4e71-ac60-29b2e20787f4-kube-api-access-bb2rc\") pod \"a8d010a2-1cec-4e71-ac60-29b2e20787f4\" (UID: \"a8d010a2-1cec-4e71-ac60-29b2e20787f4\") " Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.006661 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8d010a2-1cec-4e71-ac60-29b2e20787f4-utilities" (OuterVolumeSpecName: "utilities") pod "a8d010a2-1cec-4e71-ac60-29b2e20787f4" (UID: "a8d010a2-1cec-4e71-ac60-29b2e20787f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.009919 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8d010a2-1cec-4e71-ac60-29b2e20787f4-kube-api-access-bb2rc" (OuterVolumeSpecName: "kube-api-access-bb2rc") pod "a8d010a2-1cec-4e71-ac60-29b2e20787f4" (UID: "a8d010a2-1cec-4e71-ac60-29b2e20787f4"). InnerVolumeSpecName "kube-api-access-bb2rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.076919 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8d010a2-1cec-4e71-ac60-29b2e20787f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8d010a2-1cec-4e71-ac60-29b2e20787f4" (UID: "a8d010a2-1cec-4e71-ac60-29b2e20787f4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.105428 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8d010a2-1cec-4e71-ac60-29b2e20787f4-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.105458 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8d010a2-1cec-4e71-ac60-29b2e20787f4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.105470 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb2rc\" (UniqueName: \"kubernetes.io/projected/a8d010a2-1cec-4e71-ac60-29b2e20787f4-kube-api-access-bb2rc\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.109878 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rmtpf" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.133166 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m9ppw" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.133874 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6nhjk" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.154842 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pcmgp" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.206344 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7hmb\" (UniqueName: \"kubernetes.io/projected/ef632318-2ac5-418d-b9d4-dcd616b4d768-kube-api-access-s7hmb\") pod \"ef632318-2ac5-418d-b9d4-dcd616b4d768\" (UID: \"ef632318-2ac5-418d-b9d4-dcd616b4d768\") " Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.206388 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc8a428-3dab-402e-a105-0576aa196dcc-catalog-content\") pod \"fbc8a428-3dab-402e-a105-0576aa196dcc\" (UID: \"fbc8a428-3dab-402e-a105-0576aa196dcc\") " Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.206461 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3bd579c-4d5b-496d-bade-9a78e439970d-catalog-content\") pod \"e3bd579c-4d5b-496d-bade-9a78e439970d\" (UID: \"e3bd579c-4d5b-496d-bade-9a78e439970d\") " Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.206490 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc8a428-3dab-402e-a105-0576aa196dcc-utilities\") pod \"fbc8a428-3dab-402e-a105-0576aa196dcc\" (UID: \"fbc8a428-3dab-402e-a105-0576aa196dcc\") " Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.206511 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afecaba0-c366-4a2f-a944-1a282869a955-utilities\") pod 
\"afecaba0-c366-4a2f-a944-1a282869a955\" (UID: \"afecaba0-c366-4a2f-a944-1a282869a955\") " Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.206530 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ef632318-2ac5-418d-b9d4-dcd616b4d768-marketplace-operator-metrics\") pod \"ef632318-2ac5-418d-b9d4-dcd616b4d768\" (UID: \"ef632318-2ac5-418d-b9d4-dcd616b4d768\") " Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.206558 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkndf\" (UniqueName: \"kubernetes.io/projected/fbc8a428-3dab-402e-a105-0576aa196dcc-kube-api-access-rkndf\") pod \"fbc8a428-3dab-402e-a105-0576aa196dcc\" (UID: \"fbc8a428-3dab-402e-a105-0576aa196dcc\") " Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.206579 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef632318-2ac5-418d-b9d4-dcd616b4d768-marketplace-trusted-ca\") pod \"ef632318-2ac5-418d-b9d4-dcd616b4d768\" (UID: \"ef632318-2ac5-418d-b9d4-dcd616b4d768\") " Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.206597 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lmqj\" (UniqueName: \"kubernetes.io/projected/e3bd579c-4d5b-496d-bade-9a78e439970d-kube-api-access-5lmqj\") pod \"e3bd579c-4d5b-496d-bade-9a78e439970d\" (UID: \"e3bd579c-4d5b-496d-bade-9a78e439970d\") " Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.206624 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhr5l\" (UniqueName: \"kubernetes.io/projected/afecaba0-c366-4a2f-a944-1a282869a955-kube-api-access-nhr5l\") pod \"afecaba0-c366-4a2f-a944-1a282869a955\" (UID: \"afecaba0-c366-4a2f-a944-1a282869a955\") " Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 
18:51:53.206646 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3bd579c-4d5b-496d-bade-9a78e439970d-utilities\") pod \"e3bd579c-4d5b-496d-bade-9a78e439970d\" (UID: \"e3bd579c-4d5b-496d-bade-9a78e439970d\") " Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.206665 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afecaba0-c366-4a2f-a944-1a282869a955-catalog-content\") pod \"afecaba0-c366-4a2f-a944-1a282869a955\" (UID: \"afecaba0-c366-4a2f-a944-1a282869a955\") " Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.207869 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3bd579c-4d5b-496d-bade-9a78e439970d-utilities" (OuterVolumeSpecName: "utilities") pod "e3bd579c-4d5b-496d-bade-9a78e439970d" (UID: "e3bd579c-4d5b-496d-bade-9a78e439970d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.209431 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbc8a428-3dab-402e-a105-0576aa196dcc-utilities" (OuterVolumeSpecName: "utilities") pod "fbc8a428-3dab-402e-a105-0576aa196dcc" (UID: "fbc8a428-3dab-402e-a105-0576aa196dcc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.209907 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef632318-2ac5-418d-b9d4-dcd616b4d768-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "ef632318-2ac5-418d-b9d4-dcd616b4d768" (UID: "ef632318-2ac5-418d-b9d4-dcd616b4d768"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.210006 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef632318-2ac5-418d-b9d4-dcd616b4d768-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "ef632318-2ac5-418d-b9d4-dcd616b4d768" (UID: "ef632318-2ac5-418d-b9d4-dcd616b4d768"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.210391 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afecaba0-c366-4a2f-a944-1a282869a955-utilities" (OuterVolumeSpecName: "utilities") pod "afecaba0-c366-4a2f-a944-1a282869a955" (UID: "afecaba0-c366-4a2f-a944-1a282869a955"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.210734 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3bd579c-4d5b-496d-bade-9a78e439970d-kube-api-access-5lmqj" (OuterVolumeSpecName: "kube-api-access-5lmqj") pod "e3bd579c-4d5b-496d-bade-9a78e439970d" (UID: "e3bd579c-4d5b-496d-bade-9a78e439970d"). InnerVolumeSpecName "kube-api-access-5lmqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.211680 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afecaba0-c366-4a2f-a944-1a282869a955-kube-api-access-nhr5l" (OuterVolumeSpecName: "kube-api-access-nhr5l") pod "afecaba0-c366-4a2f-a944-1a282869a955" (UID: "afecaba0-c366-4a2f-a944-1a282869a955"). InnerVolumeSpecName "kube-api-access-nhr5l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.212605 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbc8a428-3dab-402e-a105-0576aa196dcc-kube-api-access-rkndf" (OuterVolumeSpecName: "kube-api-access-rkndf") pod "fbc8a428-3dab-402e-a105-0576aa196dcc" (UID: "fbc8a428-3dab-402e-a105-0576aa196dcc"). InnerVolumeSpecName "kube-api-access-rkndf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.213472 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef632318-2ac5-418d-b9d4-dcd616b4d768-kube-api-access-s7hmb" (OuterVolumeSpecName: "kube-api-access-s7hmb") pod "ef632318-2ac5-418d-b9d4-dcd616b4d768" (UID: "ef632318-2ac5-418d-b9d4-dcd616b4d768"). InnerVolumeSpecName "kube-api-access-s7hmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.230185 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afecaba0-c366-4a2f-a944-1a282869a955-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "afecaba0-c366-4a2f-a944-1a282869a955" (UID: "afecaba0-c366-4a2f-a944-1a282869a955"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.272824 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6cjfh"] Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.302696 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3bd579c-4d5b-496d-bade-9a78e439970d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3bd579c-4d5b-496d-bade-9a78e439970d" (UID: "e3bd579c-4d5b-496d-bade-9a78e439970d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.307981 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3bd579c-4d5b-496d-bade-9a78e439970d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.308001 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbc8a428-3dab-402e-a105-0576aa196dcc-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.308010 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afecaba0-c366-4a2f-a944-1a282869a955-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.308019 4981 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ef632318-2ac5-418d-b9d4-dcd616b4d768-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.308031 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkndf\" (UniqueName: \"kubernetes.io/projected/fbc8a428-3dab-402e-a105-0576aa196dcc-kube-api-access-rkndf\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.308039 4981 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ef632318-2ac5-418d-b9d4-dcd616b4d768-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.308068 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lmqj\" (UniqueName: \"kubernetes.io/projected/e3bd579c-4d5b-496d-bade-9a78e439970d-kube-api-access-5lmqj\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:53 
crc kubenswrapper[4981]: I0227 18:51:53.308098 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhr5l\" (UniqueName: \"kubernetes.io/projected/afecaba0-c366-4a2f-a944-1a282869a955-kube-api-access-nhr5l\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.308106 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3bd579c-4d5b-496d-bade-9a78e439970d-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.308115 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afecaba0-c366-4a2f-a944-1a282869a955-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.308123 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7hmb\" (UniqueName: \"kubernetes.io/projected/ef632318-2ac5-418d-b9d4-dcd616b4d768-kube-api-access-s7hmb\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.353933 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbc8a428-3dab-402e-a105-0576aa196dcc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fbc8a428-3dab-402e-a105-0576aa196dcc" (UID: "fbc8a428-3dab-402e-a105-0576aa196dcc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.409733 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbc8a428-3dab-402e-a105-0576aa196dcc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.766721 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6cjfh" event={"ID":"5fc084d6-4cd6-4556-a0ba-80b909119353","Type":"ContainerStarted","Data":"89b457bfb4f1b0640e0d376e7db8b4f03c58451b69f1438fa06476ca304af40b"} Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.766769 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6cjfh" event={"ID":"5fc084d6-4cd6-4556-a0ba-80b909119353","Type":"ContainerStarted","Data":"1b0281b08dab57b3c6a74057848d791642dad31c3c85fd532329f88b354d2c6c"} Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.766966 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6cjfh" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.770236 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m9ppw" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.770251 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m9ppw" event={"ID":"e3bd579c-4d5b-496d-bade-9a78e439970d","Type":"ContainerDied","Data":"6da68719a15ea42886b03e2b321b562377ca489a9a8128a2e8eeb35efc11eef4"} Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.770641 4981 scope.go:117] "RemoveContainer" containerID="f11af0fe2f067bdd8546479b54f3936d45147bcd6b90d3946ed7619fbc83c4a2" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.771668 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6cjfh" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.771739 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pcmgp" event={"ID":"ef632318-2ac5-418d-b9d4-dcd616b4d768","Type":"ContainerDied","Data":"a448c9f845ec87d2a8605fe9b289d1aae554e87faa44a90e5c4eacda0155d07d"} Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.771768 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pcmgp" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.776515 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fzncx" event={"ID":"a8d010a2-1cec-4e71-ac60-29b2e20787f4","Type":"ContainerDied","Data":"7f92747db14f2414b7569e5d0c179f323a609473464cd4e87055d3fd461a6ee1"} Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.776578 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fzncx" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.782443 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-6cjfh" podStartSLOduration=1.78242451 podStartE2EDuration="1.78242451s" podCreationTimestamp="2026-02-27 18:51:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:51:53.78180455 +0000 UTC m=+413.260585730" watchObservedRunningTime="2026-02-27 18:51:53.78242451 +0000 UTC m=+413.261205680" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.790075 4981 scope.go:117] "RemoveContainer" containerID="2b92f49cb085d48c1bffbfb2a0cdf19d6e45fa5ded19341bfc4979e34d19f42b" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.790427 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmtpf" event={"ID":"fbc8a428-3dab-402e-a105-0576aa196dcc","Type":"ContainerDied","Data":"77229a220fc51e35c71263d5d6eebc59af27f0abed475a3fa8bacb7a6f03f1c1"} Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.790531 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rmtpf" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.794838 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6nhjk" event={"ID":"afecaba0-c366-4a2f-a944-1a282869a955","Type":"ContainerDied","Data":"3e7713dd77c0d3d4d940d135824afd0d7643c141d8d71b4e397a28d6ff4687a9"} Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.794905 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6nhjk" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.815946 4981 scope.go:117] "RemoveContainer" containerID="c34e40ab5ee7bba2243e1d19cb08fff5401ef6fcc74e4c7cb14b00fded1ffd44" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.834103 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fzncx"] Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.836960 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fzncx"] Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.847226 4981 scope.go:117] "RemoveContainer" containerID="34b5f6a6014362fa4ca77f038657ff566565bf3181688e1629ac15b25e8a622a" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.851005 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rmtpf"] Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.853039 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rmtpf"] Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.870427 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6nhjk"] Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.877638 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6nhjk"] Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.884658 4981 scope.go:117] "RemoveContainer" containerID="5f610ebf3095dd86017eb6aba0ff01b369c00986eb1dc58f0b78e391f5b9edf8" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.887819 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pcmgp"] Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.893359 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-pcmgp"] Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.896568 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m9ppw"] Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.901923 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m9ppw"] Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.905749 4981 scope.go:117] "RemoveContainer" containerID="a4158cd2b02411cf681834c9aca49bf3abc94922ecbe24d05aae89ffce69799b" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.922278 4981 scope.go:117] "RemoveContainer" containerID="18aabbeb99493219fdc229ebdaac94be7dc1cedf4fa9236d8443cbb581db4a7c" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.937384 4981 scope.go:117] "RemoveContainer" containerID="513e8610a9893c11f22b17fcb728e24e054da4bdb201a345b051f58818455550" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.955445 4981 scope.go:117] "RemoveContainer" containerID="7f021a78e2685d6277851882e94f82c756da89b3f57b57016dcdbaca2ac2e671" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.971130 4981 scope.go:117] "RemoveContainer" containerID="83ad14d43913d2f995a1fd2af409f2e54b89ff582688cd775cab018c11970786" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.983337 4981 scope.go:117] "RemoveContainer" containerID="4a3bcf4a243f97dc697ec081553027686846d6aeee8302d0abf61274e95efeea" Feb 27 18:51:53 crc kubenswrapper[4981]: I0227 18:51:53.994829 4981 scope.go:117] "RemoveContainer" containerID="b50d9947f1dfaf25a613cad6d538fd399ea7de2d5ed075cb1a1750913030f342" Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.006066 4981 scope.go:117] "RemoveContainer" containerID="6979c761e8a3730a041c26b778d8309d38f15a1adcdfa857211a452e93e43d93" Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.599755 4981 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-2krnf"] Feb 27 18:51:54 crc kubenswrapper[4981]: E0227 18:51:54.600224 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afecaba0-c366-4a2f-a944-1a282869a955" containerName="extract-content" Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.600245 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="afecaba0-c366-4a2f-a944-1a282869a955" containerName="extract-content" Feb 27 18:51:54 crc kubenswrapper[4981]: E0227 18:51:54.600264 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc8a428-3dab-402e-a105-0576aa196dcc" containerName="extract-utilities" Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.600276 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc8a428-3dab-402e-a105-0576aa196dcc" containerName="extract-utilities" Feb 27 18:51:54 crc kubenswrapper[4981]: E0227 18:51:54.600290 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3bd579c-4d5b-496d-bade-9a78e439970d" containerName="extract-content" Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.600303 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3bd579c-4d5b-496d-bade-9a78e439970d" containerName="extract-content" Feb 27 18:51:54 crc kubenswrapper[4981]: E0227 18:51:54.600320 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8d010a2-1cec-4e71-ac60-29b2e20787f4" containerName="extract-content" Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.600331 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8d010a2-1cec-4e71-ac60-29b2e20787f4" containerName="extract-content" Feb 27 18:51:54 crc kubenswrapper[4981]: E0227 18:51:54.600343 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afecaba0-c366-4a2f-a944-1a282869a955" containerName="registry-server" Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.600354 4981 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="afecaba0-c366-4a2f-a944-1a282869a955" containerName="registry-server"
Feb 27 18:51:54 crc kubenswrapper[4981]: E0227 18:51:54.600373 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8d010a2-1cec-4e71-ac60-29b2e20787f4" containerName="registry-server"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.600387 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8d010a2-1cec-4e71-ac60-29b2e20787f4" containerName="registry-server"
Feb 27 18:51:54 crc kubenswrapper[4981]: E0227 18:51:54.600399 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8d010a2-1cec-4e71-ac60-29b2e20787f4" containerName="extract-utilities"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.600409 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8d010a2-1cec-4e71-ac60-29b2e20787f4" containerName="extract-utilities"
Feb 27 18:51:54 crc kubenswrapper[4981]: E0227 18:51:54.600424 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef632318-2ac5-418d-b9d4-dcd616b4d768" containerName="marketplace-operator"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.600435 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef632318-2ac5-418d-b9d4-dcd616b4d768" containerName="marketplace-operator"
Feb 27 18:51:54 crc kubenswrapper[4981]: E0227 18:51:54.600456 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3bd579c-4d5b-496d-bade-9a78e439970d" containerName="registry-server"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.600466 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3bd579c-4d5b-496d-bade-9a78e439970d" containerName="registry-server"
Feb 27 18:51:54 crc kubenswrapper[4981]: E0227 18:51:54.600481 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc8a428-3dab-402e-a105-0576aa196dcc" containerName="registry-server"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.600492 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc8a428-3dab-402e-a105-0576aa196dcc" containerName="registry-server"
Feb 27 18:51:54 crc kubenswrapper[4981]: E0227 18:51:54.600504 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3bd579c-4d5b-496d-bade-9a78e439970d" containerName="extract-utilities"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.600516 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3bd579c-4d5b-496d-bade-9a78e439970d" containerName="extract-utilities"
Feb 27 18:51:54 crc kubenswrapper[4981]: E0227 18:51:54.600533 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbc8a428-3dab-402e-a105-0576aa196dcc" containerName="extract-content"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.600544 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbc8a428-3dab-402e-a105-0576aa196dcc" containerName="extract-content"
Feb 27 18:51:54 crc kubenswrapper[4981]: E0227 18:51:54.600561 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afecaba0-c366-4a2f-a944-1a282869a955" containerName="extract-utilities"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.600570 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="afecaba0-c366-4a2f-a944-1a282869a955" containerName="extract-utilities"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.600707 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="afecaba0-c366-4a2f-a944-1a282869a955" containerName="registry-server"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.600737 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8d010a2-1cec-4e71-ac60-29b2e20787f4" containerName="registry-server"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.600754 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbc8a428-3dab-402e-a105-0576aa196dcc" containerName="registry-server"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.600770 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef632318-2ac5-418d-b9d4-dcd616b4d768" containerName="marketplace-operator"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.600783 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3bd579c-4d5b-496d-bade-9a78e439970d" containerName="registry-server"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.601963 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2krnf"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.605191 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.647878 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2krnf"]
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.755429 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxrm2\" (UniqueName: \"kubernetes.io/projected/ac79f530-9dad-40da-9fcb-d82a30bd8b57-kube-api-access-pxrm2\") pod \"certified-operators-2krnf\" (UID: \"ac79f530-9dad-40da-9fcb-d82a30bd8b57\") " pod="openshift-marketplace/certified-operators-2krnf"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.755486 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac79f530-9dad-40da-9fcb-d82a30bd8b57-utilities\") pod \"certified-operators-2krnf\" (UID: \"ac79f530-9dad-40da-9fcb-d82a30bd8b57\") " pod="openshift-marketplace/certified-operators-2krnf"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.755534 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac79f530-9dad-40da-9fcb-d82a30bd8b57-catalog-content\") pod \"certified-operators-2krnf\" (UID: \"ac79f530-9dad-40da-9fcb-d82a30bd8b57\") " pod="openshift-marketplace/certified-operators-2krnf"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.805190 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rb599"]
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.806841 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rb599"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.809964 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.814093 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rb599"]
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.856861 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac79f530-9dad-40da-9fcb-d82a30bd8b57-catalog-content\") pod \"certified-operators-2krnf\" (UID: \"ac79f530-9dad-40da-9fcb-d82a30bd8b57\") " pod="openshift-marketplace/certified-operators-2krnf"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.857257 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxrm2\" (UniqueName: \"kubernetes.io/projected/ac79f530-9dad-40da-9fcb-d82a30bd8b57-kube-api-access-pxrm2\") pod \"certified-operators-2krnf\" (UID: \"ac79f530-9dad-40da-9fcb-d82a30bd8b57\") " pod="openshift-marketplace/certified-operators-2krnf"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.857391 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac79f530-9dad-40da-9fcb-d82a30bd8b57-utilities\") pod \"certified-operators-2krnf\" (UID: \"ac79f530-9dad-40da-9fcb-d82a30bd8b57\") " pod="openshift-marketplace/certified-operators-2krnf"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.857819 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac79f530-9dad-40da-9fcb-d82a30bd8b57-catalog-content\") pod \"certified-operators-2krnf\" (UID: \"ac79f530-9dad-40da-9fcb-d82a30bd8b57\") " pod="openshift-marketplace/certified-operators-2krnf"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.857901 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac79f530-9dad-40da-9fcb-d82a30bd8b57-utilities\") pod \"certified-operators-2krnf\" (UID: \"ac79f530-9dad-40da-9fcb-d82a30bd8b57\") " pod="openshift-marketplace/certified-operators-2krnf"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.879949 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxrm2\" (UniqueName: \"kubernetes.io/projected/ac79f530-9dad-40da-9fcb-d82a30bd8b57-kube-api-access-pxrm2\") pod \"certified-operators-2krnf\" (UID: \"ac79f530-9dad-40da-9fcb-d82a30bd8b57\") " pod="openshift-marketplace/certified-operators-2krnf"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.958263 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2557a2d1-c08e-4a0a-b04e-a05aacf26465-utilities\") pod \"redhat-marketplace-rb599\" (UID: \"2557a2d1-c08e-4a0a-b04e-a05aacf26465\") " pod="openshift-marketplace/redhat-marketplace-rb599"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.958323 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4dvm\" (UniqueName: \"kubernetes.io/projected/2557a2d1-c08e-4a0a-b04e-a05aacf26465-kube-api-access-g4dvm\") pod \"redhat-marketplace-rb599\" (UID: \"2557a2d1-c08e-4a0a-b04e-a05aacf26465\") " pod="openshift-marketplace/redhat-marketplace-rb599"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.958361 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2557a2d1-c08e-4a0a-b04e-a05aacf26465-catalog-content\") pod \"redhat-marketplace-rb599\" (UID: \"2557a2d1-c08e-4a0a-b04e-a05aacf26465\") " pod="openshift-marketplace/redhat-marketplace-rb599"
Feb 27 18:51:54 crc kubenswrapper[4981]: I0227 18:51:54.958542 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2krnf"
Feb 27 18:51:55 crc kubenswrapper[4981]: I0227 18:51:55.059803 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2557a2d1-c08e-4a0a-b04e-a05aacf26465-utilities\") pod \"redhat-marketplace-rb599\" (UID: \"2557a2d1-c08e-4a0a-b04e-a05aacf26465\") " pod="openshift-marketplace/redhat-marketplace-rb599"
Feb 27 18:51:55 crc kubenswrapper[4981]: I0227 18:51:55.060089 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4dvm\" (UniqueName: \"kubernetes.io/projected/2557a2d1-c08e-4a0a-b04e-a05aacf26465-kube-api-access-g4dvm\") pod \"redhat-marketplace-rb599\" (UID: \"2557a2d1-c08e-4a0a-b04e-a05aacf26465\") " pod="openshift-marketplace/redhat-marketplace-rb599"
Feb 27 18:51:55 crc kubenswrapper[4981]: I0227 18:51:55.060130 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2557a2d1-c08e-4a0a-b04e-a05aacf26465-catalog-content\") pod \"redhat-marketplace-rb599\" (UID: \"2557a2d1-c08e-4a0a-b04e-a05aacf26465\") " pod="openshift-marketplace/redhat-marketplace-rb599"
Feb 27 18:51:55 crc kubenswrapper[4981]: I0227 18:51:55.060727 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2557a2d1-c08e-4a0a-b04e-a05aacf26465-catalog-content\") pod \"redhat-marketplace-rb599\" (UID: \"2557a2d1-c08e-4a0a-b04e-a05aacf26465\") " pod="openshift-marketplace/redhat-marketplace-rb599"
Feb 27 18:51:55 crc kubenswrapper[4981]: I0227 18:51:55.060755 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2557a2d1-c08e-4a0a-b04e-a05aacf26465-utilities\") pod \"redhat-marketplace-rb599\" (UID: \"2557a2d1-c08e-4a0a-b04e-a05aacf26465\") " pod="openshift-marketplace/redhat-marketplace-rb599"
Feb 27 18:51:55 crc kubenswrapper[4981]: I0227 18:51:55.078700 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4dvm\" (UniqueName: \"kubernetes.io/projected/2557a2d1-c08e-4a0a-b04e-a05aacf26465-kube-api-access-g4dvm\") pod \"redhat-marketplace-rb599\" (UID: \"2557a2d1-c08e-4a0a-b04e-a05aacf26465\") " pod="openshift-marketplace/redhat-marketplace-rb599"
Feb 27 18:51:55 crc kubenswrapper[4981]: I0227 18:51:55.120401 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rb599"
Feb 27 18:51:55 crc kubenswrapper[4981]: I0227 18:51:55.373155 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2krnf"]
Feb 27 18:51:55 crc kubenswrapper[4981]: W0227 18:51:55.389212 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac79f530_9dad_40da_9fcb_d82a30bd8b57.slice/crio-6b34858bc9b163c47b9fcbc24e8d178acbb9a5031b268d9b4cc60d931554eb83 WatchSource:0}: Error finding container 6b34858bc9b163c47b9fcbc24e8d178acbb9a5031b268d9b4cc60d931554eb83: Status 404 returned error can't find the container with id 6b34858bc9b163c47b9fcbc24e8d178acbb9a5031b268d9b4cc60d931554eb83
Feb 27 18:51:55 crc kubenswrapper[4981]: I0227 18:51:55.523149 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rb599"]
Feb 27 18:51:55 crc kubenswrapper[4981]: I0227 18:51:55.635214 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8d010a2-1cec-4e71-ac60-29b2e20787f4" path="/var/lib/kubelet/pods/a8d010a2-1cec-4e71-ac60-29b2e20787f4/volumes"
Feb 27 18:51:55 crc kubenswrapper[4981]: I0227 18:51:55.636609 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afecaba0-c366-4a2f-a944-1a282869a955" path="/var/lib/kubelet/pods/afecaba0-c366-4a2f-a944-1a282869a955/volumes"
Feb 27 18:51:55 crc kubenswrapper[4981]: I0227 18:51:55.637781 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3bd579c-4d5b-496d-bade-9a78e439970d" path="/var/lib/kubelet/pods/e3bd579c-4d5b-496d-bade-9a78e439970d/volumes"
Feb 27 18:51:55 crc kubenswrapper[4981]: I0227 18:51:55.640109 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef632318-2ac5-418d-b9d4-dcd616b4d768" path="/var/lib/kubelet/pods/ef632318-2ac5-418d-b9d4-dcd616b4d768/volumes"
Feb 27 18:51:55 crc kubenswrapper[4981]: I0227 18:51:55.640983 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbc8a428-3dab-402e-a105-0576aa196dcc" path="/var/lib/kubelet/pods/fbc8a428-3dab-402e-a105-0576aa196dcc/volumes"
Feb 27 18:51:55 crc kubenswrapper[4981]: I0227 18:51:55.819557 4981 generic.go:334] "Generic (PLEG): container finished" podID="ac79f530-9dad-40da-9fcb-d82a30bd8b57" containerID="f91eccf48619f7813d809b7b991d862352946384fb17773b6bdbbc276ee5310f" exitCode=0
Feb 27 18:51:55 crc kubenswrapper[4981]: I0227 18:51:55.819613 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2krnf" event={"ID":"ac79f530-9dad-40da-9fcb-d82a30bd8b57","Type":"ContainerDied","Data":"f91eccf48619f7813d809b7b991d862352946384fb17773b6bdbbc276ee5310f"}
Feb 27 18:51:55 crc kubenswrapper[4981]: I0227 18:51:55.819690 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2krnf" event={"ID":"ac79f530-9dad-40da-9fcb-d82a30bd8b57","Type":"ContainerStarted","Data":"6b34858bc9b163c47b9fcbc24e8d178acbb9a5031b268d9b4cc60d931554eb83"}
Feb 27 18:51:55 crc kubenswrapper[4981]: I0227 18:51:55.822264 4981 generic.go:334] "Generic (PLEG): container finished" podID="2557a2d1-c08e-4a0a-b04e-a05aacf26465" containerID="ef501414b87464b104320861452ad5131e6ba69d951a62b466629efa064653a8" exitCode=0
Feb 27 18:51:55 crc kubenswrapper[4981]: I0227 18:51:55.822338 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rb599" event={"ID":"2557a2d1-c08e-4a0a-b04e-a05aacf26465","Type":"ContainerDied","Data":"ef501414b87464b104320861452ad5131e6ba69d951a62b466629efa064653a8"}
Feb 27 18:51:55 crc kubenswrapper[4981]: I0227 18:51:55.822406 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rb599" event={"ID":"2557a2d1-c08e-4a0a-b04e-a05aacf26465","Type":"ContainerStarted","Data":"16de004e60d2771810803ea15b7f8c77c4f8687c2347b50f9f5f59f7a5fdd737"}
Feb 27 18:51:56 crc kubenswrapper[4981]: I0227 18:51:56.832761 4981 generic.go:334] "Generic (PLEG): container finished" podID="2557a2d1-c08e-4a0a-b04e-a05aacf26465" containerID="13ee4409759b80570ed979206704c02e2315a31e70a8b1fb33fac4d038d1de61" exitCode=0
Feb 27 18:51:56 crc kubenswrapper[4981]: I0227 18:51:56.832911 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rb599" event={"ID":"2557a2d1-c08e-4a0a-b04e-a05aacf26465","Type":"ContainerDied","Data":"13ee4409759b80570ed979206704c02e2315a31e70a8b1fb33fac4d038d1de61"}
Feb 27 18:51:56 crc kubenswrapper[4981]: I0227 18:51:56.835739 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2krnf" event={"ID":"ac79f530-9dad-40da-9fcb-d82a30bd8b57","Type":"ContainerStarted","Data":"fb458120a6a32b41b9c988e7c0f6d4a3825f4ba32fe722a821d0620982cf0426"}
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.007115 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-djqnq"]
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.009567 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-djqnq"
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.013094 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.022535 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-djqnq"]
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.091141 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80d67677-93a5-4633-88fc-dde5d45e9756-catalog-content\") pod \"redhat-operators-djqnq\" (UID: \"80d67677-93a5-4633-88fc-dde5d45e9756\") " pod="openshift-marketplace/redhat-operators-djqnq"
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.091375 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80d67677-93a5-4633-88fc-dde5d45e9756-utilities\") pod \"redhat-operators-djqnq\" (UID: \"80d67677-93a5-4633-88fc-dde5d45e9756\") " pod="openshift-marketplace/redhat-operators-djqnq"
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.091618 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kcpg\" (UniqueName: \"kubernetes.io/projected/80d67677-93a5-4633-88fc-dde5d45e9756-kube-api-access-9kcpg\") pod \"redhat-operators-djqnq\" (UID: \"80d67677-93a5-4633-88fc-dde5d45e9756\") " pod="openshift-marketplace/redhat-operators-djqnq"
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.192996 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80d67677-93a5-4633-88fc-dde5d45e9756-utilities\") pod \"redhat-operators-djqnq\" (UID: \"80d67677-93a5-4633-88fc-dde5d45e9756\") " pod="openshift-marketplace/redhat-operators-djqnq"
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.193135 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kcpg\" (UniqueName: \"kubernetes.io/projected/80d67677-93a5-4633-88fc-dde5d45e9756-kube-api-access-9kcpg\") pod \"redhat-operators-djqnq\" (UID: \"80d67677-93a5-4633-88fc-dde5d45e9756\") " pod="openshift-marketplace/redhat-operators-djqnq"
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.193206 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80d67677-93a5-4633-88fc-dde5d45e9756-catalog-content\") pod \"redhat-operators-djqnq\" (UID: \"80d67677-93a5-4633-88fc-dde5d45e9756\") " pod="openshift-marketplace/redhat-operators-djqnq"
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.193804 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80d67677-93a5-4633-88fc-dde5d45e9756-catalog-content\") pod \"redhat-operators-djqnq\" (UID: \"80d67677-93a5-4633-88fc-dde5d45e9756\") " pod="openshift-marketplace/redhat-operators-djqnq"
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.193813 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80d67677-93a5-4633-88fc-dde5d45e9756-utilities\") pod \"redhat-operators-djqnq\" (UID: \"80d67677-93a5-4633-88fc-dde5d45e9756\") " pod="openshift-marketplace/redhat-operators-djqnq"
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.211234 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wc2tk"]
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.212970 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wc2tk"
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.218353 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.220020 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wc2tk"]
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.232882 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kcpg\" (UniqueName: \"kubernetes.io/projected/80d67677-93a5-4633-88fc-dde5d45e9756-kube-api-access-9kcpg\") pod \"redhat-operators-djqnq\" (UID: \"80d67677-93a5-4633-88fc-dde5d45e9756\") " pod="openshift-marketplace/redhat-operators-djqnq"
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.295422 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f98ae2b-e26f-4877-870b-93c73484de63-utilities\") pod \"community-operators-wc2tk\" (UID: \"9f98ae2b-e26f-4877-870b-93c73484de63\") " pod="openshift-marketplace/community-operators-wc2tk"
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.295716 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7kb8\" (UniqueName: \"kubernetes.io/projected/9f98ae2b-e26f-4877-870b-93c73484de63-kube-api-access-l7kb8\") pod \"community-operators-wc2tk\" (UID: \"9f98ae2b-e26f-4877-870b-93c73484de63\") " pod="openshift-marketplace/community-operators-wc2tk"
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.295776 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f98ae2b-e26f-4877-870b-93c73484de63-catalog-content\") pod \"community-operators-wc2tk\" (UID: \"9f98ae2b-e26f-4877-870b-93c73484de63\") " pod="openshift-marketplace/community-operators-wc2tk"
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.338214 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-djqnq"
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.396703 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f98ae2b-e26f-4877-870b-93c73484de63-utilities\") pod \"community-operators-wc2tk\" (UID: \"9f98ae2b-e26f-4877-870b-93c73484de63\") " pod="openshift-marketplace/community-operators-wc2tk"
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.396983 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7kb8\" (UniqueName: \"kubernetes.io/projected/9f98ae2b-e26f-4877-870b-93c73484de63-kube-api-access-l7kb8\") pod \"community-operators-wc2tk\" (UID: \"9f98ae2b-e26f-4877-870b-93c73484de63\") " pod="openshift-marketplace/community-operators-wc2tk"
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.397017 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f98ae2b-e26f-4877-870b-93c73484de63-catalog-content\") pod \"community-operators-wc2tk\" (UID: \"9f98ae2b-e26f-4877-870b-93c73484de63\") " pod="openshift-marketplace/community-operators-wc2tk"
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.397552 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f98ae2b-e26f-4877-870b-93c73484de63-catalog-content\") pod \"community-operators-wc2tk\" (UID: \"9f98ae2b-e26f-4877-870b-93c73484de63\") " pod="openshift-marketplace/community-operators-wc2tk"
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.397605 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f98ae2b-e26f-4877-870b-93c73484de63-utilities\") pod \"community-operators-wc2tk\" (UID: \"9f98ae2b-e26f-4877-870b-93c73484de63\") " pod="openshift-marketplace/community-operators-wc2tk"
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.418547 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7kb8\" (UniqueName: \"kubernetes.io/projected/9f98ae2b-e26f-4877-870b-93c73484de63-kube-api-access-l7kb8\") pod \"community-operators-wc2tk\" (UID: \"9f98ae2b-e26f-4877-870b-93c73484de63\") " pod="openshift-marketplace/community-operators-wc2tk"
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.572422 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wc2tk"
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.786393 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-djqnq"]
Feb 27 18:51:57 crc kubenswrapper[4981]: W0227 18:51:57.793399 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80d67677_93a5_4633_88fc_dde5d45e9756.slice/crio-62f014978aa77b2020064e2cf38e8c7078e5550253c383f3f895996b07f8e45e WatchSource:0}: Error finding container 62f014978aa77b2020064e2cf38e8c7078e5550253c383f3f895996b07f8e45e: Status 404 returned error can't find the container with id 62f014978aa77b2020064e2cf38e8c7078e5550253c383f3f895996b07f8e45e
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.845742 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-djqnq" event={"ID":"80d67677-93a5-4633-88fc-dde5d45e9756","Type":"ContainerStarted","Data":"62f014978aa77b2020064e2cf38e8c7078e5550253c383f3f895996b07f8e45e"}
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.860587 4981 generic.go:334] "Generic (PLEG): container finished" podID="ac79f530-9dad-40da-9fcb-d82a30bd8b57" containerID="fb458120a6a32b41b9c988e7c0f6d4a3825f4ba32fe722a821d0620982cf0426" exitCode=0
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.860666 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2krnf" event={"ID":"ac79f530-9dad-40da-9fcb-d82a30bd8b57","Type":"ContainerDied","Data":"fb458120a6a32b41b9c988e7c0f6d4a3825f4ba32fe722a821d0620982cf0426"}
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.868376 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rb599" event={"ID":"2557a2d1-c08e-4a0a-b04e-a05aacf26465","Type":"ContainerStarted","Data":"aa39d1c609d8e9cb73b7a67227723798a8d01d04630b57fdc7baa7d8fa270bc2"}
Feb 27 18:51:57 crc kubenswrapper[4981]: I0227 18:51:57.895287 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rb599" podStartSLOduration=2.466812009 podStartE2EDuration="3.895274603s" podCreationTimestamp="2026-02-27 18:51:54 +0000 UTC" firstStartedPulling="2026-02-27 18:51:55.825213661 +0000 UTC m=+415.303994861" lastFinishedPulling="2026-02-27 18:51:57.253676285 +0000 UTC m=+416.732457455" observedRunningTime="2026-02-27 18:51:57.893007471 +0000 UTC m=+417.371788631" watchObservedRunningTime="2026-02-27 18:51:57.895274603 +0000 UTC m=+417.374055773"
Feb 27 18:51:58 crc kubenswrapper[4981]: I0227 18:51:58.031357 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wc2tk"]
Feb 27 18:51:58 crc kubenswrapper[4981]: W0227 18:51:58.047538 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f98ae2b_e26f_4877_870b_93c73484de63.slice/crio-ecedb415fd28c89e365af6d853dc61bc5a27b4b34af0fa2d0fc09dcfa17da0b9 WatchSource:0}: Error finding container ecedb415fd28c89e365af6d853dc61bc5a27b4b34af0fa2d0fc09dcfa17da0b9: Status 404 returned error can't find the container with id ecedb415fd28c89e365af6d853dc61bc5a27b4b34af0fa2d0fc09dcfa17da0b9
Feb 27 18:51:58 crc kubenswrapper[4981]: I0227 18:51:58.874764 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2krnf" event={"ID":"ac79f530-9dad-40da-9fcb-d82a30bd8b57","Type":"ContainerStarted","Data":"10b9449a46a3a9ac3d6b2f873b5e18ec10ccbc0530a24840e01a358ca5b451ac"}
Feb 27 18:51:58 crc kubenswrapper[4981]: I0227 18:51:58.876069 4981 generic.go:334] "Generic (PLEG): container finished" podID="80d67677-93a5-4633-88fc-dde5d45e9756" containerID="f8f981b179cca4015f93b313ddfd806242d17dc6f22c8bd5974ba09544bfc05e" exitCode=0
Feb 27 18:51:58 crc kubenswrapper[4981]: I0227 18:51:58.876138 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-djqnq" event={"ID":"80d67677-93a5-4633-88fc-dde5d45e9756","Type":"ContainerDied","Data":"f8f981b179cca4015f93b313ddfd806242d17dc6f22c8bd5974ba09544bfc05e"}
Feb 27 18:51:58 crc kubenswrapper[4981]: I0227 18:51:58.877715 4981 generic.go:334] "Generic (PLEG): container finished" podID="9f98ae2b-e26f-4877-870b-93c73484de63" containerID="bc24f51cebe28bfe551a10138b50534f65a2f55cb0460ca992510cb34f01ac2c" exitCode=0
Feb 27 18:51:58 crc kubenswrapper[4981]: I0227 18:51:58.877856 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wc2tk" event={"ID":"9f98ae2b-e26f-4877-870b-93c73484de63","Type":"ContainerDied","Data":"bc24f51cebe28bfe551a10138b50534f65a2f55cb0460ca992510cb34f01ac2c"}
Feb 27 18:51:58 crc kubenswrapper[4981]: I0227 18:51:58.877875 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wc2tk" event={"ID":"9f98ae2b-e26f-4877-870b-93c73484de63","Type":"ContainerStarted","Data":"ecedb415fd28c89e365af6d853dc61bc5a27b4b34af0fa2d0fc09dcfa17da0b9"}
Feb 27 18:51:58 crc kubenswrapper[4981]: I0227 18:51:58.892827 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2krnf" podStartSLOduration=2.44214063 podStartE2EDuration="4.892810925s" podCreationTimestamp="2026-02-27 18:51:54 +0000 UTC" firstStartedPulling="2026-02-27 18:51:55.821429301 +0000 UTC m=+415.300210451" lastFinishedPulling="2026-02-27 18:51:58.272099576 +0000 UTC m=+417.750880746" observedRunningTime="2026-02-27 18:51:58.890845342 +0000 UTC m=+418.369626502" watchObservedRunningTime="2026-02-27 18:51:58.892810925 +0000 UTC m=+418.371592085"
Feb 27 18:51:59 crc kubenswrapper[4981]: I0227 18:51:59.884731 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-djqnq" event={"ID":"80d67677-93a5-4633-88fc-dde5d45e9756","Type":"ContainerStarted","Data":"646477e30f8928d04020c7ff504fb1dd94efe91d9f2d20ffbc80cdf5e7d0c716"}
Feb 27 18:51:59 crc kubenswrapper[4981]: I0227 18:51:59.886516 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wc2tk" event={"ID":"9f98ae2b-e26f-4877-870b-93c73484de63","Type":"ContainerStarted","Data":"db2b0d9a4318cea4584f8ed9918443bfe07d214edc30325f17cc87b310b85131"}
Feb 27 18:52:00 crc kubenswrapper[4981]: I0227 18:52:00.141811 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536972-82z2q"]
Feb 27 18:52:00 crc kubenswrapper[4981]: I0227 18:52:00.142870 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536972-82z2q"
Feb 27 18:52:00 crc kubenswrapper[4981]: I0227 18:52:00.145486 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 27 18:52:00 crc kubenswrapper[4981]: I0227 18:52:00.145787 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 27 18:52:00 crc kubenswrapper[4981]: I0227 18:52:00.146107 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536972-82z2q"]
Feb 27 18:52:00 crc kubenswrapper[4981]: I0227 18:52:00.147159 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf"
Feb 27 18:52:00 crc kubenswrapper[4981]: I0227 18:52:00.241308 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2wmf\" (UniqueName: \"kubernetes.io/projected/32be4401-faf4-4d5a-8d74-f787df8ae6da-kube-api-access-k2wmf\") pod \"auto-csr-approver-29536972-82z2q\" (UID: \"32be4401-faf4-4d5a-8d74-f787df8ae6da\") " pod="openshift-infra/auto-csr-approver-29536972-82z2q"
Feb 27 18:52:00 crc kubenswrapper[4981]: I0227 18:52:00.342629 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2wmf\" (UniqueName: \"kubernetes.io/projected/32be4401-faf4-4d5a-8d74-f787df8ae6da-kube-api-access-k2wmf\") pod \"auto-csr-approver-29536972-82z2q\" (UID: \"32be4401-faf4-4d5a-8d74-f787df8ae6da\") " pod="openshift-infra/auto-csr-approver-29536972-82z2q"
Feb 27 18:52:00 crc kubenswrapper[4981]: I0227 18:52:00.362210 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2wmf\" (UniqueName: \"kubernetes.io/projected/32be4401-faf4-4d5a-8d74-f787df8ae6da-kube-api-access-k2wmf\") pod \"auto-csr-approver-29536972-82z2q\" (UID: \"32be4401-faf4-4d5a-8d74-f787df8ae6da\") " pod="openshift-infra/auto-csr-approver-29536972-82z2q"
Feb 27 18:52:00 crc kubenswrapper[4981]: I0227 18:52:00.491127 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536972-82z2q"
Feb 27 18:52:00 crc kubenswrapper[4981]: I0227 18:52:00.724332 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536972-82z2q"]
Feb 27 18:52:00 crc kubenswrapper[4981]: I0227 18:52:00.894906 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536972-82z2q" event={"ID":"32be4401-faf4-4d5a-8d74-f787df8ae6da","Type":"ContainerStarted","Data":"89a198381ac6ea4e42e768cd4907772696e9de4ee53b8fa7df4f834438961d4f"}
Feb 27 18:52:00 crc kubenswrapper[4981]: I0227 18:52:00.899294 4981 generic.go:334] "Generic (PLEG): container finished" podID="80d67677-93a5-4633-88fc-dde5d45e9756" containerID="646477e30f8928d04020c7ff504fb1dd94efe91d9f2d20ffbc80cdf5e7d0c716" exitCode=0
Feb 27 18:52:00 crc kubenswrapper[4981]: I0227 18:52:00.899566 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-djqnq" event={"ID":"80d67677-93a5-4633-88fc-dde5d45e9756","Type":"ContainerDied","Data":"646477e30f8928d04020c7ff504fb1dd94efe91d9f2d20ffbc80cdf5e7d0c716"}
Feb 27 18:52:00 crc kubenswrapper[4981]: I0227 18:52:00.904840 4981 generic.go:334] "Generic (PLEG): container finished" podID="9f98ae2b-e26f-4877-870b-93c73484de63" containerID="db2b0d9a4318cea4584f8ed9918443bfe07d214edc30325f17cc87b310b85131" exitCode=0
Feb 27 18:52:00 crc kubenswrapper[4981]: I0227 18:52:00.904870 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wc2tk" event={"ID":"9f98ae2b-e26f-4877-870b-93c73484de63","Type":"ContainerDied","Data":"db2b0d9a4318cea4584f8ed9918443bfe07d214edc30325f17cc87b310b85131"}
Feb 27 18:52:01 crc kubenswrapper[4981]: I0227 18:52:01.926682 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-djqnq" event={"ID":"80d67677-93a5-4633-88fc-dde5d45e9756","Type":"ContainerStarted","Data":"5a3f18a3ab7c4eecdbd26b00cf576852ea6dd2f8f6016854f32f320633b1b788"}
Feb 27 18:52:01 crc kubenswrapper[4981]: I0227 18:52:01.931324 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wc2tk" event={"ID":"9f98ae2b-e26f-4877-870b-93c73484de63","Type":"ContainerStarted","Data":"82bb4a9c13872d50c76f7ed0ee116cec48860fe475ac68208b71a86b70acf63c"}
Feb 27 18:52:01 crc kubenswrapper[4981]: I0227 18:52:01.955963 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-djqnq" podStartSLOduration=3.488641566 podStartE2EDuration="5.955945338s" podCreationTimestamp="2026-02-27 18:51:56 +0000 UTC" firstStartedPulling="2026-02-27 18:51:58.877271133 +0000 UTC m=+418.356052303" lastFinishedPulling="2026-02-27 18:52:01.344574885 +0000 UTC m=+420.823356075" observedRunningTime="2026-02-27 18:52:01.951964222 +0000 UTC m=+421.430745392" watchObservedRunningTime="2026-02-27 18:52:01.955945338 +0000 UTC m=+421.434726508"
Feb 27 18:52:01 crc kubenswrapper[4981]: I0227 18:52:01.970309 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wc2tk" podStartSLOduration=2.491965563 podStartE2EDuration="4.970293041s" podCreationTimestamp="2026-02-27 18:51:57 +0000 UTC" firstStartedPulling="2026-02-27 18:51:58.879250127 +0000 UTC m=+418.358031287" lastFinishedPulling="2026-02-27 18:52:01.357577565 +0000 UTC m=+420.836358765" observedRunningTime="2026-02-27 18:52:01.968829175 +0000 UTC m=+421.447610365" watchObservedRunningTime="2026-02-27 18:52:01.970293041 +0000 UTC m=+421.449074211"
Feb 27 18:52:02 crc kubenswrapper[4981]: I0227 18:52:02.938032 4981 generic.go:334] "Generic (PLEG): container finished" podID="32be4401-faf4-4d5a-8d74-f787df8ae6da"
containerID="8b47776555f382e2fa8c441c3f4f3efc21ebf796059f237851b8eb398c6d80ee" exitCode=0 Feb 27 18:52:02 crc kubenswrapper[4981]: I0227 18:52:02.939769 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536972-82z2q" event={"ID":"32be4401-faf4-4d5a-8d74-f787df8ae6da","Type":"ContainerDied","Data":"8b47776555f382e2fa8c441c3f4f3efc21ebf796059f237851b8eb398c6d80ee"} Feb 27 18:52:04 crc kubenswrapper[4981]: I0227 18:52:04.334460 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536972-82z2q" Feb 27 18:52:04 crc kubenswrapper[4981]: I0227 18:52:04.407719 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2wmf\" (UniqueName: \"kubernetes.io/projected/32be4401-faf4-4d5a-8d74-f787df8ae6da-kube-api-access-k2wmf\") pod \"32be4401-faf4-4d5a-8d74-f787df8ae6da\" (UID: \"32be4401-faf4-4d5a-8d74-f787df8ae6da\") " Feb 27 18:52:04 crc kubenswrapper[4981]: I0227 18:52:04.415899 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32be4401-faf4-4d5a-8d74-f787df8ae6da-kube-api-access-k2wmf" (OuterVolumeSpecName: "kube-api-access-k2wmf") pod "32be4401-faf4-4d5a-8d74-f787df8ae6da" (UID: "32be4401-faf4-4d5a-8d74-f787df8ae6da"). InnerVolumeSpecName "kube-api-access-k2wmf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:52:04 crc kubenswrapper[4981]: I0227 18:52:04.509739 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2wmf\" (UniqueName: \"kubernetes.io/projected/32be4401-faf4-4d5a-8d74-f787df8ae6da-kube-api-access-k2wmf\") on node \"crc\" DevicePath \"\"" Feb 27 18:52:04 crc kubenswrapper[4981]: I0227 18:52:04.954329 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536972-82z2q" event={"ID":"32be4401-faf4-4d5a-8d74-f787df8ae6da","Type":"ContainerDied","Data":"89a198381ac6ea4e42e768cd4907772696e9de4ee53b8fa7df4f834438961d4f"} Feb 27 18:52:04 crc kubenswrapper[4981]: I0227 18:52:04.954389 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89a198381ac6ea4e42e768cd4907772696e9de4ee53b8fa7df4f834438961d4f" Feb 27 18:52:04 crc kubenswrapper[4981]: I0227 18:52:04.954466 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536972-82z2q" Feb 27 18:52:04 crc kubenswrapper[4981]: I0227 18:52:04.959550 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2krnf" Feb 27 18:52:04 crc kubenswrapper[4981]: I0227 18:52:04.959594 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2krnf" Feb 27 18:52:05 crc kubenswrapper[4981]: I0227 18:52:05.001217 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2krnf" Feb 27 18:52:05 crc kubenswrapper[4981]: I0227 18:52:05.053748 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" podUID="a86208a8-d898-447f-ba80-f6b72f601ef0" containerName="registry" 
containerID="cri-o://90d027bce3e451b330eca8f7d1d2351b902d31ab8ff3729cd87c53e9b4fb0313" gracePeriod=30 Feb 27 18:52:05 crc kubenswrapper[4981]: I0227 18:52:05.121509 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rb599" Feb 27 18:52:05 crc kubenswrapper[4981]: I0227 18:52:05.121603 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rb599" Feb 27 18:52:05 crc kubenswrapper[4981]: I0227 18:52:05.187996 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rb599" Feb 27 18:52:05 crc kubenswrapper[4981]: I0227 18:52:05.965278 4981 generic.go:334] "Generic (PLEG): container finished" podID="a86208a8-d898-447f-ba80-f6b72f601ef0" containerID="90d027bce3e451b330eca8f7d1d2351b902d31ab8ff3729cd87c53e9b4fb0313" exitCode=0 Feb 27 18:52:05 crc kubenswrapper[4981]: I0227 18:52:05.965481 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" event={"ID":"a86208a8-d898-447f-ba80-f6b72f601ef0","Type":"ContainerDied","Data":"90d027bce3e451b330eca8f7d1d2351b902d31ab8ff3729cd87c53e9b4fb0313"} Feb 27 18:52:06 crc kubenswrapper[4981]: I0227 18:52:06.032633 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2krnf" Feb 27 18:52:06 crc kubenswrapper[4981]: I0227 18:52:06.036033 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rb599" Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.761859 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.864794 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a86208a8-d898-447f-ba80-f6b72f601ef0-registry-certificates\") pod \"a86208a8-d898-447f-ba80-f6b72f601ef0\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.865041 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"a86208a8-d898-447f-ba80-f6b72f601ef0\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.865089 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a86208a8-d898-447f-ba80-f6b72f601ef0-trusted-ca\") pod \"a86208a8-d898-447f-ba80-f6b72f601ef0\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.865124 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57gr5\" (UniqueName: \"kubernetes.io/projected/a86208a8-d898-447f-ba80-f6b72f601ef0-kube-api-access-57gr5\") pod \"a86208a8-d898-447f-ba80-f6b72f601ef0\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.865177 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a86208a8-d898-447f-ba80-f6b72f601ef0-installation-pull-secrets\") pod \"a86208a8-d898-447f-ba80-f6b72f601ef0\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.865195 4981 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a86208a8-d898-447f-ba80-f6b72f601ef0-bound-sa-token\") pod \"a86208a8-d898-447f-ba80-f6b72f601ef0\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.865230 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a86208a8-d898-447f-ba80-f6b72f601ef0-registry-tls\") pod \"a86208a8-d898-447f-ba80-f6b72f601ef0\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.865250 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a86208a8-d898-447f-ba80-f6b72f601ef0-ca-trust-extracted\") pod \"a86208a8-d898-447f-ba80-f6b72f601ef0\" (UID: \"a86208a8-d898-447f-ba80-f6b72f601ef0\") " Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.866792 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a86208a8-d898-447f-ba80-f6b72f601ef0-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "a86208a8-d898-447f-ba80-f6b72f601ef0" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.868202 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a86208a8-d898-447f-ba80-f6b72f601ef0-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a86208a8-d898-447f-ba80-f6b72f601ef0" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.876880 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a86208a8-d898-447f-ba80-f6b72f601ef0-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "a86208a8-d898-447f-ba80-f6b72f601ef0" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.877005 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a86208a8-d898-447f-ba80-f6b72f601ef0-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "a86208a8-d898-447f-ba80-f6b72f601ef0" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.877312 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a86208a8-d898-447f-ba80-f6b72f601ef0-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a86208a8-d898-447f-ba80-f6b72f601ef0" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.888765 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "a86208a8-d898-447f-ba80-f6b72f601ef0" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.894437 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a86208a8-d898-447f-ba80-f6b72f601ef0-kube-api-access-57gr5" (OuterVolumeSpecName: "kube-api-access-57gr5") pod "a86208a8-d898-447f-ba80-f6b72f601ef0" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0"). InnerVolumeSpecName "kube-api-access-57gr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.896200 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a86208a8-d898-447f-ba80-f6b72f601ef0-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "a86208a8-d898-447f-ba80-f6b72f601ef0" (UID: "a86208a8-d898-447f-ba80-f6b72f601ef0"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.966750 4981 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a86208a8-d898-447f-ba80-f6b72f601ef0-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.966779 4981 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a86208a8-d898-447f-ba80-f6b72f601ef0-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.966790 4981 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a86208a8-d898-447f-ba80-f6b72f601ef0-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.966799 4981 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/a86208a8-d898-447f-ba80-f6b72f601ef0-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.966809 4981 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a86208a8-d898-447f-ba80-f6b72f601ef0-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.966817 4981 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a86208a8-d898-447f-ba80-f6b72f601ef0-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.966827 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57gr5\" (UniqueName: \"kubernetes.io/projected/a86208a8-d898-447f-ba80-f6b72f601ef0-kube-api-access-57gr5\") on node \"crc\" DevicePath \"\"" Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.972335 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.975092 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-kmvhm" event={"ID":"a86208a8-d898-447f-ba80-f6b72f601ef0","Type":"ContainerDied","Data":"0c72032f4e55a5709092be874dfc48be5b5972c1270568275832b4361c786228"} Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.975126 4981 scope.go:117] "RemoveContainer" containerID="90d027bce3e451b330eca8f7d1d2351b902d31ab8ff3729cd87c53e9b4fb0313" Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:06.999424 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kmvhm"] Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:07.003582 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-kmvhm"] Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:07.338499 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-djqnq" Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:07.338782 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-djqnq" Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:07.573123 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wc2tk" Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:07.573174 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wc2tk" Feb 27 18:52:07 crc kubenswrapper[4981]: I0227 18:52:07.640288 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a86208a8-d898-447f-ba80-f6b72f601ef0" path="/var/lib/kubelet/pods/a86208a8-d898-447f-ba80-f6b72f601ef0/volumes" Feb 27 18:52:07 crc 
kubenswrapper[4981]: I0227 18:52:07.641570 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wc2tk" Feb 27 18:52:08 crc kubenswrapper[4981]: I0227 18:52:08.044421 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wc2tk" Feb 27 18:52:08 crc kubenswrapper[4981]: I0227 18:52:08.376974 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-djqnq" podUID="80d67677-93a5-4633-88fc-dde5d45e9756" containerName="registry-server" probeResult="failure" output=< Feb 27 18:52:08 crc kubenswrapper[4981]: timeout: failed to connect service ":50051" within 1s Feb 27 18:52:08 crc kubenswrapper[4981]: > Feb 27 18:52:17 crc kubenswrapper[4981]: I0227 18:52:17.412039 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-djqnq" Feb 27 18:52:17 crc kubenswrapper[4981]: I0227 18:52:17.500668 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-djqnq" Feb 27 18:52:20 crc kubenswrapper[4981]: I0227 18:52:20.249280 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:52:20 crc kubenswrapper[4981]: I0227 18:52:20.249384 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:52:50 crc kubenswrapper[4981]: I0227 18:52:50.248738 4981 patch_prober.go:28] 
interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:52:50 crc kubenswrapper[4981]: I0227 18:52:50.250162 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:52:50 crc kubenswrapper[4981]: I0227 18:52:50.250234 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 18:52:50 crc kubenswrapper[4981]: I0227 18:52:50.250948 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1cf48ba9e38f3906931ef155e1b3ac43296d5152b357073cef60a56716eb0e06"} pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 18:52:50 crc kubenswrapper[4981]: I0227 18:52:50.251047 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" containerID="cri-o://1cf48ba9e38f3906931ef155e1b3ac43296d5152b357073cef60a56716eb0e06" gracePeriod=600 Feb 27 18:52:51 crc kubenswrapper[4981]: I0227 18:52:51.316438 4981 generic.go:334] "Generic (PLEG): container finished" podID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerID="1cf48ba9e38f3906931ef155e1b3ac43296d5152b357073cef60a56716eb0e06" exitCode=0 Feb 27 18:52:51 crc kubenswrapper[4981]: I0227 
18:52:51.316551 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerDied","Data":"1cf48ba9e38f3906931ef155e1b3ac43296d5152b357073cef60a56716eb0e06"} Feb 27 18:52:51 crc kubenswrapper[4981]: I0227 18:52:51.316867 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerStarted","Data":"89fcef99a6cc2e4da32ac5aa04eba1c3c9ab1397affdbd60a0b604c7e75c3649"} Feb 27 18:52:51 crc kubenswrapper[4981]: I0227 18:52:51.316902 4981 scope.go:117] "RemoveContainer" containerID="cc76d3ee6be937d8a9f4f3fcd4595f2ec304a7cda8d27799da2d9733389fe569" Feb 27 18:54:00 crc kubenswrapper[4981]: I0227 18:54:00.170724 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536974-gdsvz"] Feb 27 18:54:00 crc kubenswrapper[4981]: E0227 18:54:00.171822 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32be4401-faf4-4d5a-8d74-f787df8ae6da" containerName="oc" Feb 27 18:54:00 crc kubenswrapper[4981]: I0227 18:54:00.171846 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="32be4401-faf4-4d5a-8d74-f787df8ae6da" containerName="oc" Feb 27 18:54:00 crc kubenswrapper[4981]: E0227 18:54:00.171885 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a86208a8-d898-447f-ba80-f6b72f601ef0" containerName="registry" Feb 27 18:54:00 crc kubenswrapper[4981]: I0227 18:54:00.171898 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="a86208a8-d898-447f-ba80-f6b72f601ef0" containerName="registry" Feb 27 18:54:00 crc kubenswrapper[4981]: I0227 18:54:00.172096 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="a86208a8-d898-447f-ba80-f6b72f601ef0" containerName="registry" Feb 27 18:54:00 crc kubenswrapper[4981]: I0227 18:54:00.172140 4981 
memory_manager.go:354] "RemoveStaleState removing state" podUID="32be4401-faf4-4d5a-8d74-f787df8ae6da" containerName="oc" Feb 27 18:54:00 crc kubenswrapper[4981]: I0227 18:54:00.172811 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536974-gdsvz" Feb 27 18:54:00 crc kubenswrapper[4981]: I0227 18:54:00.176668 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536974-gdsvz"] Feb 27 18:54:00 crc kubenswrapper[4981]: I0227 18:54:00.177651 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 18:54:00 crc kubenswrapper[4981]: I0227 18:54:00.177712 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 18:54:00 crc kubenswrapper[4981]: I0227 18:54:00.177982 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf" Feb 27 18:54:00 crc kubenswrapper[4981]: I0227 18:54:00.361123 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zb4k\" (UniqueName: \"kubernetes.io/projected/1172e92a-70af-4085-8fae-bc7fb4b3dba6-kube-api-access-8zb4k\") pod \"auto-csr-approver-29536974-gdsvz\" (UID: \"1172e92a-70af-4085-8fae-bc7fb4b3dba6\") " pod="openshift-infra/auto-csr-approver-29536974-gdsvz" Feb 27 18:54:00 crc kubenswrapper[4981]: I0227 18:54:00.462779 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zb4k\" (UniqueName: \"kubernetes.io/projected/1172e92a-70af-4085-8fae-bc7fb4b3dba6-kube-api-access-8zb4k\") pod \"auto-csr-approver-29536974-gdsvz\" (UID: \"1172e92a-70af-4085-8fae-bc7fb4b3dba6\") " pod="openshift-infra/auto-csr-approver-29536974-gdsvz" Feb 27 18:54:00 crc kubenswrapper[4981]: I0227 18:54:00.493808 4981 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-8zb4k\" (UniqueName: \"kubernetes.io/projected/1172e92a-70af-4085-8fae-bc7fb4b3dba6-kube-api-access-8zb4k\") pod \"auto-csr-approver-29536974-gdsvz\" (UID: \"1172e92a-70af-4085-8fae-bc7fb4b3dba6\") " pod="openshift-infra/auto-csr-approver-29536974-gdsvz" Feb 27 18:54:00 crc kubenswrapper[4981]: I0227 18:54:00.498511 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536974-gdsvz" Feb 27 18:54:00 crc kubenswrapper[4981]: I0227 18:54:00.742601 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536974-gdsvz"] Feb 27 18:54:00 crc kubenswrapper[4981]: W0227 18:54:00.753169 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1172e92a_70af_4085_8fae_bc7fb4b3dba6.slice/crio-6e2ed8b388e294f1fb02a64a74cb6d04522934721156b89703962d1b03c7e255 WatchSource:0}: Error finding container 6e2ed8b388e294f1fb02a64a74cb6d04522934721156b89703962d1b03c7e255: Status 404 returned error can't find the container with id 6e2ed8b388e294f1fb02a64a74cb6d04522934721156b89703962d1b03c7e255 Feb 27 18:54:00 crc kubenswrapper[4981]: I0227 18:54:00.756791 4981 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 18:54:00 crc kubenswrapper[4981]: I0227 18:54:00.830557 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536974-gdsvz" event={"ID":"1172e92a-70af-4085-8fae-bc7fb4b3dba6","Type":"ContainerStarted","Data":"6e2ed8b388e294f1fb02a64a74cb6d04522934721156b89703962d1b03c7e255"} Feb 27 18:54:02 crc kubenswrapper[4981]: I0227 18:54:02.849683 4981 generic.go:334] "Generic (PLEG): container finished" podID="1172e92a-70af-4085-8fae-bc7fb4b3dba6" containerID="778eca494f075c4eccaca83613a76eb0e2d323cd1b0b7567006cb80651f9953d" exitCode=0 Feb 27 18:54:02 crc kubenswrapper[4981]: I0227 
18:54:02.849799 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536974-gdsvz" event={"ID":"1172e92a-70af-4085-8fae-bc7fb4b3dba6","Type":"ContainerDied","Data":"778eca494f075c4eccaca83613a76eb0e2d323cd1b0b7567006cb80651f9953d"} Feb 27 18:54:04 crc kubenswrapper[4981]: I0227 18:54:04.215908 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536974-gdsvz" Feb 27 18:54:04 crc kubenswrapper[4981]: I0227 18:54:04.411957 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zb4k\" (UniqueName: \"kubernetes.io/projected/1172e92a-70af-4085-8fae-bc7fb4b3dba6-kube-api-access-8zb4k\") pod \"1172e92a-70af-4085-8fae-bc7fb4b3dba6\" (UID: \"1172e92a-70af-4085-8fae-bc7fb4b3dba6\") " Feb 27 18:54:04 crc kubenswrapper[4981]: I0227 18:54:04.420208 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1172e92a-70af-4085-8fae-bc7fb4b3dba6-kube-api-access-8zb4k" (OuterVolumeSpecName: "kube-api-access-8zb4k") pod "1172e92a-70af-4085-8fae-bc7fb4b3dba6" (UID: "1172e92a-70af-4085-8fae-bc7fb4b3dba6"). InnerVolumeSpecName "kube-api-access-8zb4k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:54:04 crc kubenswrapper[4981]: I0227 18:54:04.513208 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zb4k\" (UniqueName: \"kubernetes.io/projected/1172e92a-70af-4085-8fae-bc7fb4b3dba6-kube-api-access-8zb4k\") on node \"crc\" DevicePath \"\"" Feb 27 18:54:04 crc kubenswrapper[4981]: I0227 18:54:04.867525 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536974-gdsvz" event={"ID":"1172e92a-70af-4085-8fae-bc7fb4b3dba6","Type":"ContainerDied","Data":"6e2ed8b388e294f1fb02a64a74cb6d04522934721156b89703962d1b03c7e255"} Feb 27 18:54:04 crc kubenswrapper[4981]: I0227 18:54:04.867580 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e2ed8b388e294f1fb02a64a74cb6d04522934721156b89703962d1b03c7e255" Feb 27 18:54:04 crc kubenswrapper[4981]: I0227 18:54:04.867608 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536974-gdsvz" Feb 27 18:54:05 crc kubenswrapper[4981]: I0227 18:54:05.296211 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536968-jn8tc"] Feb 27 18:54:05 crc kubenswrapper[4981]: I0227 18:54:05.303202 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536968-jn8tc"] Feb 27 18:54:05 crc kubenswrapper[4981]: I0227 18:54:05.648682 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bff5a34-e6d7-482d-bed3-dfe5269b225a" path="/var/lib/kubelet/pods/8bff5a34-e6d7-482d-bed3-dfe5269b225a/volumes" Feb 27 18:54:50 crc kubenswrapper[4981]: I0227 18:54:50.251401 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 27 18:54:50 crc kubenswrapper[4981]: I0227 18:54:50.252406 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:55:02 crc kubenswrapper[4981]: I0227 18:55:02.362830 4981 scope.go:117] "RemoveContainer" containerID="faa5024f7d10e3c37f7183d7a6b6c9555a92f0f23d606e016f8b33db40afbe15" Feb 27 18:55:02 crc kubenswrapper[4981]: I0227 18:55:02.411941 4981 scope.go:117] "RemoveContainer" containerID="d23ad5845416195b877daf6663e39cdd880e7818c5bde1b9c6b221e7cc43728f" Feb 27 18:55:20 crc kubenswrapper[4981]: I0227 18:55:20.249514 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:55:20 crc kubenswrapper[4981]: I0227 18:55:20.250325 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:55:50 crc kubenswrapper[4981]: I0227 18:55:50.249550 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:55:50 crc kubenswrapper[4981]: I0227 18:55:50.250330 4981 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:55:50 crc kubenswrapper[4981]: I0227 18:55:50.250396 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 18:55:50 crc kubenswrapper[4981]: I0227 18:55:50.251141 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"89fcef99a6cc2e4da32ac5aa04eba1c3c9ab1397affdbd60a0b604c7e75c3649"} pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 18:55:50 crc kubenswrapper[4981]: I0227 18:55:50.251238 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" containerID="cri-o://89fcef99a6cc2e4da32ac5aa04eba1c3c9ab1397affdbd60a0b604c7e75c3649" gracePeriod=600 Feb 27 18:55:50 crc kubenswrapper[4981]: I0227 18:55:50.623821 4981 generic.go:334] "Generic (PLEG): container finished" podID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerID="89fcef99a6cc2e4da32ac5aa04eba1c3c9ab1397affdbd60a0b604c7e75c3649" exitCode=0 Feb 27 18:55:50 crc kubenswrapper[4981]: I0227 18:55:50.623920 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerDied","Data":"89fcef99a6cc2e4da32ac5aa04eba1c3c9ab1397affdbd60a0b604c7e75c3649"} Feb 27 18:55:50 crc kubenswrapper[4981]: I0227 18:55:50.624309 4981 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerStarted","Data":"7d611e423ab1d303ac9796cb0e04da4b0a780cfed24b834c8ebeafc14a8a6963"} Feb 27 18:55:50 crc kubenswrapper[4981]: I0227 18:55:50.624345 4981 scope.go:117] "RemoveContainer" containerID="1cf48ba9e38f3906931ef155e1b3ac43296d5152b357073cef60a56716eb0e06" Feb 27 18:56:00 crc kubenswrapper[4981]: I0227 18:56:00.142206 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536976-w4mmk"] Feb 27 18:56:00 crc kubenswrapper[4981]: E0227 18:56:00.143137 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1172e92a-70af-4085-8fae-bc7fb4b3dba6" containerName="oc" Feb 27 18:56:00 crc kubenswrapper[4981]: I0227 18:56:00.143158 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="1172e92a-70af-4085-8fae-bc7fb4b3dba6" containerName="oc" Feb 27 18:56:00 crc kubenswrapper[4981]: I0227 18:56:00.143322 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="1172e92a-70af-4085-8fae-bc7fb4b3dba6" containerName="oc" Feb 27 18:56:00 crc kubenswrapper[4981]: I0227 18:56:00.143850 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536976-w4mmk" Feb 27 18:56:00 crc kubenswrapper[4981]: I0227 18:56:00.146211 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 18:56:00 crc kubenswrapper[4981]: I0227 18:56:00.146708 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 18:56:00 crc kubenswrapper[4981]: I0227 18:56:00.147481 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf" Feb 27 18:56:00 crc kubenswrapper[4981]: I0227 18:56:00.151400 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536976-w4mmk"] Feb 27 18:56:00 crc kubenswrapper[4981]: I0227 18:56:00.245564 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgdr2\" (UniqueName: \"kubernetes.io/projected/53b9d1d4-db23-486c-9a1f-9ff21fc7b802-kube-api-access-xgdr2\") pod \"auto-csr-approver-29536976-w4mmk\" (UID: \"53b9d1d4-db23-486c-9a1f-9ff21fc7b802\") " pod="openshift-infra/auto-csr-approver-29536976-w4mmk" Feb 27 18:56:00 crc kubenswrapper[4981]: I0227 18:56:00.347192 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgdr2\" (UniqueName: \"kubernetes.io/projected/53b9d1d4-db23-486c-9a1f-9ff21fc7b802-kube-api-access-xgdr2\") pod \"auto-csr-approver-29536976-w4mmk\" (UID: \"53b9d1d4-db23-486c-9a1f-9ff21fc7b802\") " pod="openshift-infra/auto-csr-approver-29536976-w4mmk" Feb 27 18:56:00 crc kubenswrapper[4981]: I0227 18:56:00.372925 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgdr2\" (UniqueName: \"kubernetes.io/projected/53b9d1d4-db23-486c-9a1f-9ff21fc7b802-kube-api-access-xgdr2\") pod \"auto-csr-approver-29536976-w4mmk\" (UID: \"53b9d1d4-db23-486c-9a1f-9ff21fc7b802\") " 
pod="openshift-infra/auto-csr-approver-29536976-w4mmk" Feb 27 18:56:00 crc kubenswrapper[4981]: I0227 18:56:00.498248 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536976-w4mmk" Feb 27 18:56:00 crc kubenswrapper[4981]: I0227 18:56:00.769737 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536976-w4mmk"] Feb 27 18:56:01 crc kubenswrapper[4981]: I0227 18:56:01.704196 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536976-w4mmk" event={"ID":"53b9d1d4-db23-486c-9a1f-9ff21fc7b802","Type":"ContainerStarted","Data":"b2d8ae5794747c051ba730d60d080bfb60a2d1ebbc2732a80790a11460dbc067"} Feb 27 18:56:02 crc kubenswrapper[4981]: I0227 18:56:02.713827 4981 generic.go:334] "Generic (PLEG): container finished" podID="53b9d1d4-db23-486c-9a1f-9ff21fc7b802" containerID="f2103bd39e5ce4b4891daf6da2f76cc1df1178b6b341a96b23cde6cf19513719" exitCode=0 Feb 27 18:56:02 crc kubenswrapper[4981]: I0227 18:56:02.713915 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536976-w4mmk" event={"ID":"53b9d1d4-db23-486c-9a1f-9ff21fc7b802","Type":"ContainerDied","Data":"f2103bd39e5ce4b4891daf6da2f76cc1df1178b6b341a96b23cde6cf19513719"} Feb 27 18:56:04 crc kubenswrapper[4981]: I0227 18:56:04.067312 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536976-w4mmk" Feb 27 18:56:04 crc kubenswrapper[4981]: I0227 18:56:04.203360 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgdr2\" (UniqueName: \"kubernetes.io/projected/53b9d1d4-db23-486c-9a1f-9ff21fc7b802-kube-api-access-xgdr2\") pod \"53b9d1d4-db23-486c-9a1f-9ff21fc7b802\" (UID: \"53b9d1d4-db23-486c-9a1f-9ff21fc7b802\") " Feb 27 18:56:04 crc kubenswrapper[4981]: I0227 18:56:04.211664 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53b9d1d4-db23-486c-9a1f-9ff21fc7b802-kube-api-access-xgdr2" (OuterVolumeSpecName: "kube-api-access-xgdr2") pod "53b9d1d4-db23-486c-9a1f-9ff21fc7b802" (UID: "53b9d1d4-db23-486c-9a1f-9ff21fc7b802"). InnerVolumeSpecName "kube-api-access-xgdr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:56:04 crc kubenswrapper[4981]: I0227 18:56:04.305955 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgdr2\" (UniqueName: \"kubernetes.io/projected/53b9d1d4-db23-486c-9a1f-9ff21fc7b802-kube-api-access-xgdr2\") on node \"crc\" DevicePath \"\"" Feb 27 18:56:04 crc kubenswrapper[4981]: I0227 18:56:04.729215 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536976-w4mmk" event={"ID":"53b9d1d4-db23-486c-9a1f-9ff21fc7b802","Type":"ContainerDied","Data":"b2d8ae5794747c051ba730d60d080bfb60a2d1ebbc2732a80790a11460dbc067"} Feb 27 18:56:04 crc kubenswrapper[4981]: I0227 18:56:04.729267 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2d8ae5794747c051ba730d60d080bfb60a2d1ebbc2732a80790a11460dbc067" Feb 27 18:56:04 crc kubenswrapper[4981]: I0227 18:56:04.729298 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536976-w4mmk" Feb 27 18:56:05 crc kubenswrapper[4981]: I0227 18:56:05.149775 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536970-cm5kp"] Feb 27 18:56:05 crc kubenswrapper[4981]: I0227 18:56:05.156551 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536970-cm5kp"] Feb 27 18:56:05 crc kubenswrapper[4981]: I0227 18:56:05.639586 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79d53422-a900-419e-8027-602fa5b1401f" path="/var/lib/kubelet/pods/79d53422-a900-419e-8027-602fa5b1401f/volumes" Feb 27 18:57:02 crc kubenswrapper[4981]: I0227 18:57:02.488690 4981 scope.go:117] "RemoveContainer" containerID="8b3186e9c59609c7476c71ccd23c8acfb64733bc1763c14cdb0a0dc4efc5772f" Feb 27 18:57:50 crc kubenswrapper[4981]: I0227 18:57:50.248800 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:57:50 crc kubenswrapper[4981]: I0227 18:57:50.249688 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:58:00 crc kubenswrapper[4981]: I0227 18:58:00.143258 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536978-drs27"] Feb 27 18:58:00 crc kubenswrapper[4981]: E0227 18:58:00.144478 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53b9d1d4-db23-486c-9a1f-9ff21fc7b802" containerName="oc" Feb 27 18:58:00 crc 
kubenswrapper[4981]: I0227 18:58:00.144502 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="53b9d1d4-db23-486c-9a1f-9ff21fc7b802" containerName="oc" Feb 27 18:58:00 crc kubenswrapper[4981]: I0227 18:58:00.144700 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="53b9d1d4-db23-486c-9a1f-9ff21fc7b802" containerName="oc" Feb 27 18:58:00 crc kubenswrapper[4981]: I0227 18:58:00.145401 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536978-drs27" Feb 27 18:58:00 crc kubenswrapper[4981]: I0227 18:58:00.151521 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf" Feb 27 18:58:00 crc kubenswrapper[4981]: I0227 18:58:00.151598 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 18:58:00 crc kubenswrapper[4981]: I0227 18:58:00.151628 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 18:58:00 crc kubenswrapper[4981]: I0227 18:58:00.162780 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536978-drs27"] Feb 27 18:58:00 crc kubenswrapper[4981]: I0227 18:58:00.262297 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb4q2\" (UniqueName: \"kubernetes.io/projected/7acbd9d5-f113-4fdc-8ee8-02a2df5d840e-kube-api-access-vb4q2\") pod \"auto-csr-approver-29536978-drs27\" (UID: \"7acbd9d5-f113-4fdc-8ee8-02a2df5d840e\") " pod="openshift-infra/auto-csr-approver-29536978-drs27" Feb 27 18:58:00 crc kubenswrapper[4981]: I0227 18:58:00.364207 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb4q2\" (UniqueName: \"kubernetes.io/projected/7acbd9d5-f113-4fdc-8ee8-02a2df5d840e-kube-api-access-vb4q2\") pod \"auto-csr-approver-29536978-drs27\" 
(UID: \"7acbd9d5-f113-4fdc-8ee8-02a2df5d840e\") " pod="openshift-infra/auto-csr-approver-29536978-drs27" Feb 27 18:58:00 crc kubenswrapper[4981]: I0227 18:58:00.397727 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb4q2\" (UniqueName: \"kubernetes.io/projected/7acbd9d5-f113-4fdc-8ee8-02a2df5d840e-kube-api-access-vb4q2\") pod \"auto-csr-approver-29536978-drs27\" (UID: \"7acbd9d5-f113-4fdc-8ee8-02a2df5d840e\") " pod="openshift-infra/auto-csr-approver-29536978-drs27" Feb 27 18:58:00 crc kubenswrapper[4981]: I0227 18:58:00.493158 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536978-drs27" Feb 27 18:58:00 crc kubenswrapper[4981]: I0227 18:58:00.760243 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536978-drs27"] Feb 27 18:58:00 crc kubenswrapper[4981]: W0227 18:58:00.772537 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7acbd9d5_f113_4fdc_8ee8_02a2df5d840e.slice/crio-f2c63768cbf6a418a0b94abbb189d8d8a6dce18dabba65539f4ebf0969c51de1 WatchSource:0}: Error finding container f2c63768cbf6a418a0b94abbb189d8d8a6dce18dabba65539f4ebf0969c51de1: Status 404 returned error can't find the container with id f2c63768cbf6a418a0b94abbb189d8d8a6dce18dabba65539f4ebf0969c51de1 Feb 27 18:58:01 crc kubenswrapper[4981]: I0227 18:58:01.539417 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536978-drs27" event={"ID":"7acbd9d5-f113-4fdc-8ee8-02a2df5d840e","Type":"ContainerStarted","Data":"f2c63768cbf6a418a0b94abbb189d8d8a6dce18dabba65539f4ebf0969c51de1"} Feb 27 18:58:02 crc kubenswrapper[4981]: I0227 18:58:02.544677 4981 generic.go:334] "Generic (PLEG): container finished" podID="7acbd9d5-f113-4fdc-8ee8-02a2df5d840e" containerID="610278bafe9213e09e992a980ac04a61586d101e3473c12e3fbaf4e3115972de" 
exitCode=0 Feb 27 18:58:02 crc kubenswrapper[4981]: I0227 18:58:02.544726 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536978-drs27" event={"ID":"7acbd9d5-f113-4fdc-8ee8-02a2df5d840e","Type":"ContainerDied","Data":"610278bafe9213e09e992a980ac04a61586d101e3473c12e3fbaf4e3115972de"} Feb 27 18:58:03 crc kubenswrapper[4981]: I0227 18:58:03.947270 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536978-drs27" Feb 27 18:58:04 crc kubenswrapper[4981]: I0227 18:58:04.116526 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb4q2\" (UniqueName: \"kubernetes.io/projected/7acbd9d5-f113-4fdc-8ee8-02a2df5d840e-kube-api-access-vb4q2\") pod \"7acbd9d5-f113-4fdc-8ee8-02a2df5d840e\" (UID: \"7acbd9d5-f113-4fdc-8ee8-02a2df5d840e\") " Feb 27 18:58:04 crc kubenswrapper[4981]: I0227 18:58:04.125323 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7acbd9d5-f113-4fdc-8ee8-02a2df5d840e-kube-api-access-vb4q2" (OuterVolumeSpecName: "kube-api-access-vb4q2") pod "7acbd9d5-f113-4fdc-8ee8-02a2df5d840e" (UID: "7acbd9d5-f113-4fdc-8ee8-02a2df5d840e"). InnerVolumeSpecName "kube-api-access-vb4q2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:58:04 crc kubenswrapper[4981]: I0227 18:58:04.218997 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb4q2\" (UniqueName: \"kubernetes.io/projected/7acbd9d5-f113-4fdc-8ee8-02a2df5d840e-kube-api-access-vb4q2\") on node \"crc\" DevicePath \"\"" Feb 27 18:58:04 crc kubenswrapper[4981]: I0227 18:58:04.560166 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536978-drs27" event={"ID":"7acbd9d5-f113-4fdc-8ee8-02a2df5d840e","Type":"ContainerDied","Data":"f2c63768cbf6a418a0b94abbb189d8d8a6dce18dabba65539f4ebf0969c51de1"} Feb 27 18:58:04 crc kubenswrapper[4981]: I0227 18:58:04.560226 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2c63768cbf6a418a0b94abbb189d8d8a6dce18dabba65539f4ebf0969c51de1" Feb 27 18:58:04 crc kubenswrapper[4981]: I0227 18:58:04.560241 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536978-drs27" Feb 27 18:58:05 crc kubenswrapper[4981]: I0227 18:58:05.019833 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536972-82z2q"] Feb 27 18:58:05 crc kubenswrapper[4981]: I0227 18:58:05.027515 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536972-82z2q"] Feb 27 18:58:05 crc kubenswrapper[4981]: I0227 18:58:05.641699 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32be4401-faf4-4d5a-8d74-f787df8ae6da" path="/var/lib/kubelet/pods/32be4401-faf4-4d5a-8d74-f787df8ae6da/volumes" Feb 27 18:58:06 crc kubenswrapper[4981]: I0227 18:58:06.749992 4981 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 27 18:58:20 crc kubenswrapper[4981]: I0227 18:58:20.249657 4981 patch_prober.go:28] interesting 
pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:58:20 crc kubenswrapper[4981]: I0227 18:58:20.250369 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:58:50 crc kubenswrapper[4981]: I0227 18:58:50.249135 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:58:50 crc kubenswrapper[4981]: I0227 18:58:50.249976 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:58:50 crc kubenswrapper[4981]: I0227 18:58:50.250041 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 18:58:50 crc kubenswrapper[4981]: I0227 18:58:50.250890 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7d611e423ab1d303ac9796cb0e04da4b0a780cfed24b834c8ebeafc14a8a6963"} pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Feb 27 18:58:50 crc kubenswrapper[4981]: I0227 18:58:50.250986 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" containerID="cri-o://7d611e423ab1d303ac9796cb0e04da4b0a780cfed24b834c8ebeafc14a8a6963" gracePeriod=600 Feb 27 18:58:50 crc kubenswrapper[4981]: I0227 18:58:50.900531 4981 generic.go:334] "Generic (PLEG): container finished" podID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerID="7d611e423ab1d303ac9796cb0e04da4b0a780cfed24b834c8ebeafc14a8a6963" exitCode=0 Feb 27 18:58:50 crc kubenswrapper[4981]: I0227 18:58:50.901099 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerDied","Data":"7d611e423ab1d303ac9796cb0e04da4b0a780cfed24b834c8ebeafc14a8a6963"} Feb 27 18:58:50 crc kubenswrapper[4981]: I0227 18:58:50.901155 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerStarted","Data":"219fa48bb79b5cd44ef23b0ba5b266e3305b85445a083e120d72a5d185159bb6"} Feb 27 18:58:50 crc kubenswrapper[4981]: I0227 18:58:50.901195 4981 scope.go:117] "RemoveContainer" containerID="89fcef99a6cc2e4da32ac5aa04eba1c3c9ab1397affdbd60a0b604c7e75c3649" Feb 27 18:58:59 crc kubenswrapper[4981]: I0227 18:58:59.777538 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6rlwn"] Feb 27 18:58:59 crc kubenswrapper[4981]: I0227 18:58:59.778608 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="ovn-controller" 
containerID="cri-o://5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61" gracePeriod=30 Feb 27 18:58:59 crc kubenswrapper[4981]: I0227 18:58:59.778680 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="nbdb" containerID="cri-o://bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39" gracePeriod=30 Feb 27 18:58:59 crc kubenswrapper[4981]: I0227 18:58:59.778734 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="northd" containerID="cri-o://082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501" gracePeriod=30 Feb 27 18:58:59 crc kubenswrapper[4981]: I0227 18:58:59.778777 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444" gracePeriod=30 Feb 27 18:58:59 crc kubenswrapper[4981]: I0227 18:58:59.778809 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="kube-rbac-proxy-node" containerID="cri-o://02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68" gracePeriod=30 Feb 27 18:58:59 crc kubenswrapper[4981]: I0227 18:58:59.778849 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="ovn-acl-logging" containerID="cri-o://0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf" gracePeriod=30 Feb 27 18:58:59 crc kubenswrapper[4981]: I0227 
18:58:59.779168 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="sbdb" containerID="cri-o://bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e" gracePeriod=30 Feb 27 18:58:59 crc kubenswrapper[4981]: I0227 18:58:59.839722 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="ovnkube-controller" containerID="cri-o://3b6dbbcd19a01f9599d64a66cadf7ad97dd7ed2ae063c13bff3b1d20efb9faae" gracePeriod=30 Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.056813 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-992xv_2f03f89e-d428-4246-a710-23c47810b60e/kube-multus/1.log" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.057556 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-992xv_2f03f89e-d428-4246-a710-23c47810b60e/kube-multus/0.log" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.057595 4981 generic.go:334] "Generic (PLEG): container finished" podID="2f03f89e-d428-4246-a710-23c47810b60e" containerID="511d08431fde2a2ed342b5d8c934dc8915b5ac7d408321b7898626b3ea30becf" exitCode=2 Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.057648 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-992xv" event={"ID":"2f03f89e-d428-4246-a710-23c47810b60e","Type":"ContainerDied","Data":"511d08431fde2a2ed342b5d8c934dc8915b5ac7d408321b7898626b3ea30becf"} Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.057689 4981 scope.go:117] "RemoveContainer" containerID="624b40ad8001a295726242c917ee182224a8fb2fe0077a0129531a28c9e2209c" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.058230 4981 scope.go:117] "RemoveContainer" 
containerID="511d08431fde2a2ed342b5d8c934dc8915b5ac7d408321b7898626b3ea30becf" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.060792 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlwn_0918866b-8c49-4332-bb4d-bea02b35f047/ovnkube-controller/2.log" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.066397 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlwn_0918866b-8c49-4332-bb4d-bea02b35f047/ovn-acl-logging/0.log" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.069921 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlwn_0918866b-8c49-4332-bb4d-bea02b35f047/ovn-controller/0.log" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.070427 4981 generic.go:334] "Generic (PLEG): container finished" podID="0918866b-8c49-4332-bb4d-bea02b35f047" containerID="3b6dbbcd19a01f9599d64a66cadf7ad97dd7ed2ae063c13bff3b1d20efb9faae" exitCode=0 Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.070468 4981 generic.go:334] "Generic (PLEG): container finished" podID="0918866b-8c49-4332-bb4d-bea02b35f047" containerID="e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444" exitCode=0 Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.070478 4981 generic.go:334] "Generic (PLEG): container finished" podID="0918866b-8c49-4332-bb4d-bea02b35f047" containerID="02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68" exitCode=0 Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.070490 4981 generic.go:334] "Generic (PLEG): container finished" podID="0918866b-8c49-4332-bb4d-bea02b35f047" containerID="0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf" exitCode=143 Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.070500 4981 generic.go:334] "Generic (PLEG): container finished" podID="0918866b-8c49-4332-bb4d-bea02b35f047" 
containerID="5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61" exitCode=143 Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.070522 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" event={"ID":"0918866b-8c49-4332-bb4d-bea02b35f047","Type":"ContainerDied","Data":"3b6dbbcd19a01f9599d64a66cadf7ad97dd7ed2ae063c13bff3b1d20efb9faae"} Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.070555 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" event={"ID":"0918866b-8c49-4332-bb4d-bea02b35f047","Type":"ContainerDied","Data":"e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444"} Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.070570 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" event={"ID":"0918866b-8c49-4332-bb4d-bea02b35f047","Type":"ContainerDied","Data":"02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68"} Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.070585 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" event={"ID":"0918866b-8c49-4332-bb4d-bea02b35f047","Type":"ContainerDied","Data":"0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf"} Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.070598 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" event={"ID":"0918866b-8c49-4332-bb4d-bea02b35f047","Type":"ContainerDied","Data":"5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61"} Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.235688 4981 scope.go:117] "RemoveContainer" containerID="f97a12e1711e303ddf5cac918727a315715716c2b2e4239a36f592d45b89bad0" Feb 27 18:59:00 crc kubenswrapper[4981]: E0227 18:59:00.660468 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code 
= NotFound desc = container is not created or running: checking if PID of bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e is running failed: container process not found" containerID="bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 27 18:59:00 crc kubenswrapper[4981]: E0227 18:59:00.660695 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39 is running failed: container process not found" containerID="bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 27 18:59:00 crc kubenswrapper[4981]: E0227 18:59:00.661483 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e is running failed: container process not found" containerID="bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 27 18:59:00 crc kubenswrapper[4981]: E0227 18:59:00.661487 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39 is running failed: container process not found" containerID="bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 27 18:59:00 crc kubenswrapper[4981]: E0227 18:59:00.661863 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e is running failed: container process not found" containerID="bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 27 18:59:00 crc kubenswrapper[4981]: E0227 18:59:00.661869 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39 is running failed: container process not found" containerID="bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 27 18:59:00 crc kubenswrapper[4981]: E0227 18:59:00.661903 4981 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="sbdb" Feb 27 18:59:00 crc kubenswrapper[4981]: E0227 18:59:00.661909 4981 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="nbdb" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.713287 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlwn_0918866b-8c49-4332-bb4d-bea02b35f047/ovn-acl-logging/0.log" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.713991 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlwn_0918866b-8c49-4332-bb4d-bea02b35f047/ovn-controller/0.log" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.714654 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794321 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-b6465"] Feb 27 18:59:00 crc kubenswrapper[4981]: E0227 18:59:00.794535 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="kube-rbac-proxy-ovn-metrics" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794549 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="kube-rbac-proxy-ovn-metrics" Feb 27 18:59:00 crc kubenswrapper[4981]: E0227 18:59:00.794559 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="kube-rbac-proxy-node" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794565 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="kube-rbac-proxy-node" Feb 27 18:59:00 crc kubenswrapper[4981]: E0227 18:59:00.794578 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="kubecfg-setup" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794584 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="kubecfg-setup" Feb 27 18:59:00 crc kubenswrapper[4981]: E0227 18:59:00.794591 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="ovnkube-controller" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794598 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="ovnkube-controller" Feb 27 18:59:00 crc kubenswrapper[4981]: E0227 18:59:00.794608 4981 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="nbdb" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794614 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="nbdb" Feb 27 18:59:00 crc kubenswrapper[4981]: E0227 18:59:00.794625 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="sbdb" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794630 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="sbdb" Feb 27 18:59:00 crc kubenswrapper[4981]: E0227 18:59:00.794638 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="ovn-acl-logging" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794644 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="ovn-acl-logging" Feb 27 18:59:00 crc kubenswrapper[4981]: E0227 18:59:00.794649 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="ovnkube-controller" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794655 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="ovnkube-controller" Feb 27 18:59:00 crc kubenswrapper[4981]: E0227 18:59:00.794663 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="ovnkube-controller" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794668 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="ovnkube-controller" Feb 27 18:59:00 crc kubenswrapper[4981]: E0227 18:59:00.794676 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" 
containerName="northd" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794682 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="northd" Feb 27 18:59:00 crc kubenswrapper[4981]: E0227 18:59:00.794690 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7acbd9d5-f113-4fdc-8ee8-02a2df5d840e" containerName="oc" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794695 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="7acbd9d5-f113-4fdc-8ee8-02a2df5d840e" containerName="oc" Feb 27 18:59:00 crc kubenswrapper[4981]: E0227 18:59:00.794703 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="ovn-controller" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794708 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="ovn-controller" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794786 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="ovn-acl-logging" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794797 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="kube-rbac-proxy-ovn-metrics" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794805 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="northd" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794813 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="ovnkube-controller" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794819 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="ovnkube-controller" Feb 
27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794826 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="ovnkube-controller" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794832 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="ovnkube-controller" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794841 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="ovn-controller" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794850 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="kube-rbac-proxy-node" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794855 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="nbdb" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794861 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="sbdb" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794867 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="7acbd9d5-f113-4fdc-8ee8-02a2df5d840e" containerName="oc" Feb 27 18:59:00 crc kubenswrapper[4981]: E0227 18:59:00.794949 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="ovnkube-controller" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.794956 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" containerName="ovnkube-controller" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.796485 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879140 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-systemd-units\") pod \"0918866b-8c49-4332-bb4d-bea02b35f047\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879182 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-etc-openvswitch\") pod \"0918866b-8c49-4332-bb4d-bea02b35f047\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879214 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-run-ovn\") pod \"0918866b-8c49-4332-bb4d-bea02b35f047\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879244 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0918866b-8c49-4332-bb4d-bea02b35f047-env-overrides\") pod \"0918866b-8c49-4332-bb4d-bea02b35f047\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879248 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "0918866b-8c49-4332-bb4d-bea02b35f047" (UID: "0918866b-8c49-4332-bb4d-bea02b35f047"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879270 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-run-ovn-kubernetes\") pod \"0918866b-8c49-4332-bb4d-bea02b35f047\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879295 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-cni-netd\") pod \"0918866b-8c49-4332-bb4d-bea02b35f047\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879285 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "0918866b-8c49-4332-bb4d-bea02b35f047" (UID: "0918866b-8c49-4332-bb4d-bea02b35f047"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879323 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0918866b-8c49-4332-bb4d-bea02b35f047-ovn-node-metrics-cert\") pod \"0918866b-8c49-4332-bb4d-bea02b35f047\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879339 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-run-netns\") pod \"0918866b-8c49-4332-bb4d-bea02b35f047\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879345 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "0918866b-8c49-4332-bb4d-bea02b35f047" (UID: "0918866b-8c49-4332-bb4d-bea02b35f047"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879339 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "0918866b-8c49-4332-bb4d-bea02b35f047" (UID: "0918866b-8c49-4332-bb4d-bea02b35f047"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879359 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "0918866b-8c49-4332-bb4d-bea02b35f047" (UID: "0918866b-8c49-4332-bb4d-bea02b35f047"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879372 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "0918866b-8c49-4332-bb4d-bea02b35f047" (UID: "0918866b-8c49-4332-bb4d-bea02b35f047"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879361 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0918866b-8c49-4332-bb4d-bea02b35f047-ovnkube-script-lib\") pod \"0918866b-8c49-4332-bb4d-bea02b35f047\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879436 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-var-lib-openvswitch\") pod \"0918866b-8c49-4332-bb4d-bea02b35f047\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879460 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-slash\") pod \"0918866b-8c49-4332-bb4d-bea02b35f047\" (UID: 
\"0918866b-8c49-4332-bb4d-bea02b35f047\") " Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879482 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "0918866b-8c49-4332-bb4d-bea02b35f047" (UID: "0918866b-8c49-4332-bb4d-bea02b35f047"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879494 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-kubelet\") pod \"0918866b-8c49-4332-bb4d-bea02b35f047\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879513 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-log-socket\") pod \"0918866b-8c49-4332-bb4d-bea02b35f047\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879517 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-slash" (OuterVolumeSpecName: "host-slash") pod "0918866b-8c49-4332-bb4d-bea02b35f047" (UID: "0918866b-8c49-4332-bb4d-bea02b35f047"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879529 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "0918866b-8c49-4332-bb4d-bea02b35f047" (UID: "0918866b-8c49-4332-bb4d-bea02b35f047"). 
InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879538 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-run-systemd\") pod \"0918866b-8c49-4332-bb4d-bea02b35f047\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879549 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-log-socket" (OuterVolumeSpecName: "log-socket") pod "0918866b-8c49-4332-bb4d-bea02b35f047" (UID: "0918866b-8c49-4332-bb4d-bea02b35f047"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879571 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-run-openvswitch\") pod \"0918866b-8c49-4332-bb4d-bea02b35f047\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879598 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm8jq\" (UniqueName: \"kubernetes.io/projected/0918866b-8c49-4332-bb4d-bea02b35f047-kube-api-access-lm8jq\") pod \"0918866b-8c49-4332-bb4d-bea02b35f047\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879646 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-var-lib-cni-networks-ovn-kubernetes\") pod \"0918866b-8c49-4332-bb4d-bea02b35f047\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " 
Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879681 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0918866b-8c49-4332-bb4d-bea02b35f047-ovnkube-config\") pod \"0918866b-8c49-4332-bb4d-bea02b35f047\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879691 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0918866b-8c49-4332-bb4d-bea02b35f047-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "0918866b-8c49-4332-bb4d-bea02b35f047" (UID: "0918866b-8c49-4332-bb4d-bea02b35f047"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879700 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-node-log\") pod \"0918866b-8c49-4332-bb4d-bea02b35f047\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879727 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-cni-bin\") pod \"0918866b-8c49-4332-bb4d-bea02b35f047\" (UID: \"0918866b-8c49-4332-bb4d-bea02b35f047\") " Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879939 4981 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0918866b-8c49-4332-bb4d-bea02b35f047-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879955 4981 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879966 4981 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879978 4981 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879990 4981 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.880001 4981 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-slash\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.880014 4981 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879701 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0918866b-8c49-4332-bb4d-bea02b35f047-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "0918866b-8c49-4332-bb4d-bea02b35f047" (UID: "0918866b-8c49-4332-bb4d-bea02b35f047"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.880025 4981 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-log-socket\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.880038 4981 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.880067 4981 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.880080 4981 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879721 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "0918866b-8c49-4332-bb4d-bea02b35f047" (UID: "0918866b-8c49-4332-bb4d-bea02b35f047"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879975 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0918866b-8c49-4332-bb4d-bea02b35f047-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "0918866b-8c49-4332-bb4d-bea02b35f047" (UID: "0918866b-8c49-4332-bb4d-bea02b35f047"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.879992 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "0918866b-8c49-4332-bb4d-bea02b35f047" (UID: "0918866b-8c49-4332-bb4d-bea02b35f047"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.880013 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-node-log" (OuterVolumeSpecName: "node-log") pod "0918866b-8c49-4332-bb4d-bea02b35f047" (UID: "0918866b-8c49-4332-bb4d-bea02b35f047"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.880041 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "0918866b-8c49-4332-bb4d-bea02b35f047" (UID: "0918866b-8c49-4332-bb4d-bea02b35f047"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.886722 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0918866b-8c49-4332-bb4d-bea02b35f047-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "0918866b-8c49-4332-bb4d-bea02b35f047" (UID: "0918866b-8c49-4332-bb4d-bea02b35f047"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.886811 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0918866b-8c49-4332-bb4d-bea02b35f047-kube-api-access-lm8jq" (OuterVolumeSpecName: "kube-api-access-lm8jq") pod "0918866b-8c49-4332-bb4d-bea02b35f047" (UID: "0918866b-8c49-4332-bb4d-bea02b35f047"). InnerVolumeSpecName "kube-api-access-lm8jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.896968 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "0918866b-8c49-4332-bb4d-bea02b35f047" (UID: "0918866b-8c49-4332-bb4d-bea02b35f047"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.981513 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-node-log\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.981554 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-host-cni-bin\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.981573 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-host-run-netns\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.981596 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-run-systemd\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.981654 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ef67a254-846a-4518-b825-6fab9803f0d9-ovnkube-script-lib\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.981682 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef67a254-846a-4518-b825-6fab9803f0d9-env-overrides\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.981704 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ef67a254-846a-4518-b825-6fab9803f0d9-ovn-node-metrics-cert\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.981727 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-var-lib-openvswitch\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.981746 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-run-openvswitch\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.981760 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-host-run-ovn-kubernetes\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.981779 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-host-cni-netd\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.981864 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.981907 4981 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-host-kubelet\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.981927 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-host-slash\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.981977 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-systemd-units\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.981998 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-etc-openvswitch\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.982015 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-run-ovn\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.982037 4981 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9vcn\" (UniqueName: \"kubernetes.io/projected/ef67a254-846a-4518-b825-6fab9803f0d9-kube-api-access-n9vcn\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.982077 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-log-socket\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.982106 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ef67a254-846a-4518-b825-6fab9803f0d9-ovnkube-config\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.982163 4981 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.982175 4981 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0918866b-8c49-4332-bb4d-bea02b35f047-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.982186 4981 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0918866b-8c49-4332-bb4d-bea02b35f047-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:00 crc 
kubenswrapper[4981]: I0227 18:59:00.982195 4981 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.982203 4981 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.982212 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm8jq\" (UniqueName: \"kubernetes.io/projected/0918866b-8c49-4332-bb4d-bea02b35f047-kube-api-access-lm8jq\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.982221 4981 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.982230 4981 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0918866b-8c49-4332-bb4d-bea02b35f047-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:00 crc kubenswrapper[4981]: I0227 18:59:00.982239 4981 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0918866b-8c49-4332-bb4d-bea02b35f047-node-log\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.083112 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-systemd-units\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.083219 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-etc-openvswitch\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.083239 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-run-ovn\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.083257 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9vcn\" (UniqueName: \"kubernetes.io/projected/ef67a254-846a-4518-b825-6fab9803f0d9-kube-api-access-n9vcn\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.083262 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-systemd-units\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.083329 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-run-ovn\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 
18:59:01.083339 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-etc-openvswitch\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.083343 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-log-socket\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.083277 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-log-socket\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.083428 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ef67a254-846a-4518-b825-6fab9803f0d9-ovnkube-config\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.083471 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-node-log\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.083498 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-host-cni-bin\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.083531 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-host-run-netns\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.083559 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-node-log\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.083566 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-run-systemd\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.083593 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-run-systemd\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.083630 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-host-cni-bin\") pod \"ovnkube-node-b6465\" (UID: 
\"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.083647 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ef67a254-846a-4518-b825-6fab9803f0d9-ovnkube-script-lib\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.083661 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-host-run-netns\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.083687 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef67a254-846a-4518-b825-6fab9803f0d9-env-overrides\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.084042 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlwn_0918866b-8c49-4332-bb4d-bea02b35f047/ovn-acl-logging/0.log" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.084124 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ef67a254-846a-4518-b825-6fab9803f0d9-ovnkube-config\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.084187 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ef67a254-846a-4518-b825-6fab9803f0d9-ovn-node-metrics-cert\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.084257 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-var-lib-openvswitch\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.084280 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef67a254-846a-4518-b825-6fab9803f0d9-env-overrides\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.084297 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-run-openvswitch\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.084317 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-var-lib-openvswitch\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.084326 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-host-run-ovn-kubernetes\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.084349 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-run-openvswitch\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.084363 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-host-cni-netd\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.084376 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-host-run-ovn-kubernetes\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.084406 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.084459 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-host-kubelet\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.084487 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-host-cni-netd\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.084496 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-host-kubelet\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.084467 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.084521 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-host-slash\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.084502 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ef67a254-846a-4518-b825-6fab9803f0d9-host-slash\") 
pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.084801 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ef67a254-846a-4518-b825-6fab9803f0d9-ovnkube-script-lib\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.085681 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6rlwn_0918866b-8c49-4332-bb4d-bea02b35f047/ovn-controller/0.log" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.086162 4981 generic.go:334] "Generic (PLEG): container finished" podID="0918866b-8c49-4332-bb4d-bea02b35f047" containerID="bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e" exitCode=0 Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.086200 4981 generic.go:334] "Generic (PLEG): container finished" podID="0918866b-8c49-4332-bb4d-bea02b35f047" containerID="bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39" exitCode=0 Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.086203 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" event={"ID":"0918866b-8c49-4332-bb4d-bea02b35f047","Type":"ContainerDied","Data":"bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e"} Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.086241 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" event={"ID":"0918866b-8c49-4332-bb4d-bea02b35f047","Type":"ContainerDied","Data":"bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39"} Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.086253 4981 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" event={"ID":"0918866b-8c49-4332-bb4d-bea02b35f047","Type":"ContainerDied","Data":"082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501"} Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.086270 4981 scope.go:117] "RemoveContainer" containerID="3b6dbbcd19a01f9599d64a66cadf7ad97dd7ed2ae063c13bff3b1d20efb9faae" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.086216 4981 generic.go:334] "Generic (PLEG): container finished" podID="0918866b-8c49-4332-bb4d-bea02b35f047" containerID="082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501" exitCode=0 Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.086326 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.086382 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rlwn" event={"ID":"0918866b-8c49-4332-bb4d-bea02b35f047","Type":"ContainerDied","Data":"f138b4f7e022848b500a07d4646746d1cb35f8efe4f7204646c2aeb809d39a00"} Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.093112 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ef67a254-846a-4518-b825-6fab9803f0d9-ovn-node-metrics-cert\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.102278 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-992xv_2f03f89e-d428-4246-a710-23c47810b60e/kube-multus/1.log" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.102390 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-992xv" 
event={"ID":"2f03f89e-d428-4246-a710-23c47810b60e","Type":"ContainerStarted","Data":"0c3f1816ce0d9bb6ec841205c8c2ccebd3febb3abac083a0954369ba03ecf243"} Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.105351 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9vcn\" (UniqueName: \"kubernetes.io/projected/ef67a254-846a-4518-b825-6fab9803f0d9-kube-api-access-n9vcn\") pod \"ovnkube-node-b6465\" (UID: \"ef67a254-846a-4518-b825-6fab9803f0d9\") " pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.109334 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.188745 4981 scope.go:117] "RemoveContainer" containerID="bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.204041 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6rlwn"] Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.211946 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6rlwn"] Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.222991 4981 scope.go:117] "RemoveContainer" containerID="bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.242486 4981 scope.go:117] "RemoveContainer" containerID="082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.258690 4981 scope.go:117] "RemoveContainer" containerID="e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.270450 4981 scope.go:117] "RemoveContainer" containerID="02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68" Feb 27 18:59:01 crc 
kubenswrapper[4981]: I0227 18:59:01.281520 4981 scope.go:117] "RemoveContainer" containerID="0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.294158 4981 scope.go:117] "RemoveContainer" containerID="5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.309847 4981 scope.go:117] "RemoveContainer" containerID="17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.327102 4981 scope.go:117] "RemoveContainer" containerID="3b6dbbcd19a01f9599d64a66cadf7ad97dd7ed2ae063c13bff3b1d20efb9faae" Feb 27 18:59:01 crc kubenswrapper[4981]: E0227 18:59:01.327789 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b6dbbcd19a01f9599d64a66cadf7ad97dd7ed2ae063c13bff3b1d20efb9faae\": container with ID starting with 3b6dbbcd19a01f9599d64a66cadf7ad97dd7ed2ae063c13bff3b1d20efb9faae not found: ID does not exist" containerID="3b6dbbcd19a01f9599d64a66cadf7ad97dd7ed2ae063c13bff3b1d20efb9faae" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.327893 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b6dbbcd19a01f9599d64a66cadf7ad97dd7ed2ae063c13bff3b1d20efb9faae"} err="failed to get container status \"3b6dbbcd19a01f9599d64a66cadf7ad97dd7ed2ae063c13bff3b1d20efb9faae\": rpc error: code = NotFound desc = could not find container \"3b6dbbcd19a01f9599d64a66cadf7ad97dd7ed2ae063c13bff3b1d20efb9faae\": container with ID starting with 3b6dbbcd19a01f9599d64a66cadf7ad97dd7ed2ae063c13bff3b1d20efb9faae not found: ID does not exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.327960 4981 scope.go:117] "RemoveContainer" containerID="bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e" Feb 27 18:59:01 crc kubenswrapper[4981]: E0227 18:59:01.328759 
4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\": container with ID starting with bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e not found: ID does not exist" containerID="bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.328796 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e"} err="failed to get container status \"bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\": rpc error: code = NotFound desc = could not find container \"bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\": container with ID starting with bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e not found: ID does not exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.328822 4981 scope.go:117] "RemoveContainer" containerID="bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39" Feb 27 18:59:01 crc kubenswrapper[4981]: E0227 18:59:01.329113 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\": container with ID starting with bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39 not found: ID does not exist" containerID="bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.329143 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39"} err="failed to get container status \"bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\": rpc error: code = 
NotFound desc = could not find container \"bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\": container with ID starting with bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39 not found: ID does not exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.329175 4981 scope.go:117] "RemoveContainer" containerID="082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501" Feb 27 18:59:01 crc kubenswrapper[4981]: E0227 18:59:01.329519 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\": container with ID starting with 082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501 not found: ID does not exist" containerID="082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.329592 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501"} err="failed to get container status \"082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\": rpc error: code = NotFound desc = could not find container \"082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\": container with ID starting with 082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501 not found: ID does not exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.329638 4981 scope.go:117] "RemoveContainer" containerID="e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444" Feb 27 18:59:01 crc kubenswrapper[4981]: E0227 18:59:01.330019 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\": container with ID starting with 
e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444 not found: ID does not exist" containerID="e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.330087 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444"} err="failed to get container status \"e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\": rpc error: code = NotFound desc = could not find container \"e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\": container with ID starting with e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444 not found: ID does not exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.330116 4981 scope.go:117] "RemoveContainer" containerID="02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68" Feb 27 18:59:01 crc kubenswrapper[4981]: E0227 18:59:01.330414 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\": container with ID starting with 02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68 not found: ID does not exist" containerID="02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.330479 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68"} err="failed to get container status \"02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\": rpc error: code = NotFound desc = could not find container \"02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\": container with ID starting with 02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68 not found: ID does not 
exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.330524 4981 scope.go:117] "RemoveContainer" containerID="0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf" Feb 27 18:59:01 crc kubenswrapper[4981]: E0227 18:59:01.330839 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\": container with ID starting with 0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf not found: ID does not exist" containerID="0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.330882 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf"} err="failed to get container status \"0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\": rpc error: code = NotFound desc = could not find container \"0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\": container with ID starting with 0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf not found: ID does not exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.330908 4981 scope.go:117] "RemoveContainer" containerID="5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61" Feb 27 18:59:01 crc kubenswrapper[4981]: E0227 18:59:01.331186 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\": container with ID starting with 5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61 not found: ID does not exist" containerID="5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.331231 4981 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61"} err="failed to get container status \"5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\": rpc error: code = NotFound desc = could not find container \"5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\": container with ID starting with 5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61 not found: ID does not exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.331263 4981 scope.go:117] "RemoveContainer" containerID="17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74" Feb 27 18:59:01 crc kubenswrapper[4981]: E0227 18:59:01.331550 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\": container with ID starting with 17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74 not found: ID does not exist" containerID="17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.331591 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74"} err="failed to get container status \"17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\": rpc error: code = NotFound desc = could not find container \"17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\": container with ID starting with 17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74 not found: ID does not exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.331619 4981 scope.go:117] "RemoveContainer" containerID="3b6dbbcd19a01f9599d64a66cadf7ad97dd7ed2ae063c13bff3b1d20efb9faae" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.331884 4981 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b6dbbcd19a01f9599d64a66cadf7ad97dd7ed2ae063c13bff3b1d20efb9faae"} err="failed to get container status \"3b6dbbcd19a01f9599d64a66cadf7ad97dd7ed2ae063c13bff3b1d20efb9faae\": rpc error: code = NotFound desc = could not find container \"3b6dbbcd19a01f9599d64a66cadf7ad97dd7ed2ae063c13bff3b1d20efb9faae\": container with ID starting with 3b6dbbcd19a01f9599d64a66cadf7ad97dd7ed2ae063c13bff3b1d20efb9faae not found: ID does not exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.331921 4981 scope.go:117] "RemoveContainer" containerID="bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.332201 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e"} err="failed to get container status \"bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\": rpc error: code = NotFound desc = could not find container \"bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\": container with ID starting with bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e not found: ID does not exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.332237 4981 scope.go:117] "RemoveContainer" containerID="bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.332504 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39"} err="failed to get container status \"bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\": rpc error: code = NotFound desc = could not find container \"bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\": container with ID starting with 
bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39 not found: ID does not exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.332542 4981 scope.go:117] "RemoveContainer" containerID="082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.332808 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501"} err="failed to get container status \"082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\": rpc error: code = NotFound desc = could not find container \"082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\": container with ID starting with 082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501 not found: ID does not exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.332844 4981 scope.go:117] "RemoveContainer" containerID="e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.333521 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444"} err="failed to get container status \"e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\": rpc error: code = NotFound desc = could not find container \"e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\": container with ID starting with e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444 not found: ID does not exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.333560 4981 scope.go:117] "RemoveContainer" containerID="02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.333896 4981 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68"} err="failed to get container status \"02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\": rpc error: code = NotFound desc = could not find container \"02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\": container with ID starting with 02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68 not found: ID does not exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.333932 4981 scope.go:117] "RemoveContainer" containerID="0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.334221 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf"} err="failed to get container status \"0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\": rpc error: code = NotFound desc = could not find container \"0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\": container with ID starting with 0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf not found: ID does not exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.334276 4981 scope.go:117] "RemoveContainer" containerID="5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.334584 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61"} err="failed to get container status \"5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\": rpc error: code = NotFound desc = could not find container \"5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\": container with ID starting with 5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61 not found: ID does not 
exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.334622 4981 scope.go:117] "RemoveContainer" containerID="17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.334928 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74"} err="failed to get container status \"17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\": rpc error: code = NotFound desc = could not find container \"17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\": container with ID starting with 17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74 not found: ID does not exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.334980 4981 scope.go:117] "RemoveContainer" containerID="3b6dbbcd19a01f9599d64a66cadf7ad97dd7ed2ae063c13bff3b1d20efb9faae" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.335501 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b6dbbcd19a01f9599d64a66cadf7ad97dd7ed2ae063c13bff3b1d20efb9faae"} err="failed to get container status \"3b6dbbcd19a01f9599d64a66cadf7ad97dd7ed2ae063c13bff3b1d20efb9faae\": rpc error: code = NotFound desc = could not find container \"3b6dbbcd19a01f9599d64a66cadf7ad97dd7ed2ae063c13bff3b1d20efb9faae\": container with ID starting with 3b6dbbcd19a01f9599d64a66cadf7ad97dd7ed2ae063c13bff3b1d20efb9faae not found: ID does not exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.335540 4981 scope.go:117] "RemoveContainer" containerID="bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.335865 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e"} err="failed to get container status 
\"bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\": rpc error: code = NotFound desc = could not find container \"bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e\": container with ID starting with bbcdcde80ca344a6073c4f4be2c1858954e4ce7aa41f24bd26c05a54e1167c1e not found: ID does not exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.335925 4981 scope.go:117] "RemoveContainer" containerID="bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.336225 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39"} err="failed to get container status \"bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\": rpc error: code = NotFound desc = could not find container \"bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39\": container with ID starting with bc4ff23f8c2b0d73ad9ad9967ae915f4ca13c4bba568d45c7d95a51f2cf3eb39 not found: ID does not exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.336262 4981 scope.go:117] "RemoveContainer" containerID="082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.336563 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501"} err="failed to get container status \"082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\": rpc error: code = NotFound desc = could not find container \"082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501\": container with ID starting with 082fcf7992b915e2ca0012f515932a7dad743f52b412950edf14685a8de41501 not found: ID does not exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.336609 4981 scope.go:117] "RemoveContainer" 
containerID="e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.336846 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444"} err="failed to get container status \"e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\": rpc error: code = NotFound desc = could not find container \"e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444\": container with ID starting with e6afba311045f97935ddb8b4f2f2f344f53eb60426dca4c4ff35c2c545a4c444 not found: ID does not exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.336888 4981 scope.go:117] "RemoveContainer" containerID="02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.337185 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68"} err="failed to get container status \"02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\": rpc error: code = NotFound desc = could not find container \"02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68\": container with ID starting with 02e42045ef8564e1c9fd13023090cd07ae595b3221cfede32d2c73cf69f38e68 not found: ID does not exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.337217 4981 scope.go:117] "RemoveContainer" containerID="0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.337449 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf"} err="failed to get container status \"0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\": rpc error: code = NotFound desc = could 
not find container \"0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf\": container with ID starting with 0288bef881b55ede0066cb964d94d1615346209a42040a976cd6225cf80eb5bf not found: ID does not exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.337481 4981 scope.go:117] "RemoveContainer" containerID="5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.337700 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61"} err="failed to get container status \"5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\": rpc error: code = NotFound desc = could not find container \"5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61\": container with ID starting with 5dd3c1b19348e52d1c789e340e2237d638adfa026d1cc1dd84c03dbe94aeba61 not found: ID does not exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.337730 4981 scope.go:117] "RemoveContainer" containerID="17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.337947 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74"} err="failed to get container status \"17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\": rpc error: code = NotFound desc = could not find container \"17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74\": container with ID starting with 17c53909884817654fc7c18917afe400bf00317fd5eda99b94d93f650156bb74 not found: ID does not exist" Feb 27 18:59:01 crc kubenswrapper[4981]: I0227 18:59:01.638195 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0918866b-8c49-4332-bb4d-bea02b35f047" 
path="/var/lib/kubelet/pods/0918866b-8c49-4332-bb4d-bea02b35f047/volumes" Feb 27 18:59:02 crc kubenswrapper[4981]: I0227 18:59:02.112277 4981 generic.go:334] "Generic (PLEG): container finished" podID="ef67a254-846a-4518-b825-6fab9803f0d9" containerID="5be893aa113263cfe740be9317f754ebcfe5a1674b39bf32e06edd258ffe7b9e" exitCode=0 Feb 27 18:59:02 crc kubenswrapper[4981]: I0227 18:59:02.112382 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6465" event={"ID":"ef67a254-846a-4518-b825-6fab9803f0d9","Type":"ContainerDied","Data":"5be893aa113263cfe740be9317f754ebcfe5a1674b39bf32e06edd258ffe7b9e"} Feb 27 18:59:02 crc kubenswrapper[4981]: I0227 18:59:02.112732 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6465" event={"ID":"ef67a254-846a-4518-b825-6fab9803f0d9","Type":"ContainerStarted","Data":"5d5515aa87f624d8e87279958082463059c54fc5bf4422703f723713b0fec5df"} Feb 27 18:59:02 crc kubenswrapper[4981]: I0227 18:59:02.590829 4981 scope.go:117] "RemoveContainer" containerID="8b47776555f382e2fa8c441c3f4f3efc21ebf796059f237851b8eb398c6d80ee" Feb 27 18:59:03 crc kubenswrapper[4981]: I0227 18:59:03.125981 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6465" event={"ID":"ef67a254-846a-4518-b825-6fab9803f0d9","Type":"ContainerStarted","Data":"559fe5e08899565d20c6d4b232933f1b5aef23b08112ee70bba125309d540aa7"} Feb 27 18:59:03 crc kubenswrapper[4981]: I0227 18:59:03.126367 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6465" event={"ID":"ef67a254-846a-4518-b825-6fab9803f0d9","Type":"ContainerStarted","Data":"897e9c54e8e25a4dd383c8e2a098fb4b062a751d5ecca776e2fc19b17efc460c"} Feb 27 18:59:04 crc kubenswrapper[4981]: I0227 18:59:04.135979 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6465" 
event={"ID":"ef67a254-846a-4518-b825-6fab9803f0d9","Type":"ContainerStarted","Data":"4e594617ffea7dd6dbd7534410f0d5c611db14ed5f26db87492c3f3bb213ea45"} Feb 27 18:59:04 crc kubenswrapper[4981]: I0227 18:59:04.136666 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6465" event={"ID":"ef67a254-846a-4518-b825-6fab9803f0d9","Type":"ContainerStarted","Data":"42e7022afd543317f80ed24db6fd2f7856768379f99d5ad5a69ef3c71d8cdaae"} Feb 27 18:59:05 crc kubenswrapper[4981]: I0227 18:59:05.145742 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6465" event={"ID":"ef67a254-846a-4518-b825-6fab9803f0d9","Type":"ContainerStarted","Data":"a7f169c93d9bea9005c589f4c3c603585ce8513c59a9b1972aa6fee81f66030f"} Feb 27 18:59:05 crc kubenswrapper[4981]: I0227 18:59:05.146027 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6465" event={"ID":"ef67a254-846a-4518-b825-6fab9803f0d9","Type":"ContainerStarted","Data":"e56e36b2fa305cef521b8a6c3fa0283820931c6bdcfcca3b6b7cddb5c4214b0f"} Feb 27 18:59:08 crc kubenswrapper[4981]: I0227 18:59:08.874523 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-wcjkj"] Feb 27 18:59:08 crc kubenswrapper[4981]: I0227 18:59:08.876290 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-wcjkj" Feb 27 18:59:08 crc kubenswrapper[4981]: I0227 18:59:08.878688 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Feb 27 18:59:08 crc kubenswrapper[4981]: I0227 18:59:08.879210 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Feb 27 18:59:08 crc kubenswrapper[4981]: I0227 18:59:08.880720 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Feb 27 18:59:08 crc kubenswrapper[4981]: I0227 18:59:08.881478 4981 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-r6c7h" Feb 27 18:59:08 crc kubenswrapper[4981]: I0227 18:59:08.987115 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5861d4d6-b86c-4181-874e-0e38c28e16cd-crc-storage\") pod \"crc-storage-crc-wcjkj\" (UID: \"5861d4d6-b86c-4181-874e-0e38c28e16cd\") " pod="crc-storage/crc-storage-crc-wcjkj" Feb 27 18:59:08 crc kubenswrapper[4981]: I0227 18:59:08.987957 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n7mf\" (UniqueName: \"kubernetes.io/projected/5861d4d6-b86c-4181-874e-0e38c28e16cd-kube-api-access-8n7mf\") pod \"crc-storage-crc-wcjkj\" (UID: \"5861d4d6-b86c-4181-874e-0e38c28e16cd\") " pod="crc-storage/crc-storage-crc-wcjkj" Feb 27 18:59:08 crc kubenswrapper[4981]: I0227 18:59:08.988037 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5861d4d6-b86c-4181-874e-0e38c28e16cd-node-mnt\") pod \"crc-storage-crc-wcjkj\" (UID: \"5861d4d6-b86c-4181-874e-0e38c28e16cd\") " pod="crc-storage/crc-storage-crc-wcjkj" Feb 27 18:59:09 crc kubenswrapper[4981]: I0227 18:59:09.089721 4981 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5861d4d6-b86c-4181-874e-0e38c28e16cd-crc-storage\") pod \"crc-storage-crc-wcjkj\" (UID: \"5861d4d6-b86c-4181-874e-0e38c28e16cd\") " pod="crc-storage/crc-storage-crc-wcjkj" Feb 27 18:59:09 crc kubenswrapper[4981]: I0227 18:59:09.089940 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n7mf\" (UniqueName: \"kubernetes.io/projected/5861d4d6-b86c-4181-874e-0e38c28e16cd-kube-api-access-8n7mf\") pod \"crc-storage-crc-wcjkj\" (UID: \"5861d4d6-b86c-4181-874e-0e38c28e16cd\") " pod="crc-storage/crc-storage-crc-wcjkj" Feb 27 18:59:09 crc kubenswrapper[4981]: I0227 18:59:09.089982 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5861d4d6-b86c-4181-874e-0e38c28e16cd-node-mnt\") pod \"crc-storage-crc-wcjkj\" (UID: \"5861d4d6-b86c-4181-874e-0e38c28e16cd\") " pod="crc-storage/crc-storage-crc-wcjkj" Feb 27 18:59:09 crc kubenswrapper[4981]: I0227 18:59:09.090210 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5861d4d6-b86c-4181-874e-0e38c28e16cd-node-mnt\") pod \"crc-storage-crc-wcjkj\" (UID: \"5861d4d6-b86c-4181-874e-0e38c28e16cd\") " pod="crc-storage/crc-storage-crc-wcjkj" Feb 27 18:59:09 crc kubenswrapper[4981]: I0227 18:59:09.090787 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5861d4d6-b86c-4181-874e-0e38c28e16cd-crc-storage\") pod \"crc-storage-crc-wcjkj\" (UID: \"5861d4d6-b86c-4181-874e-0e38c28e16cd\") " pod="crc-storage/crc-storage-crc-wcjkj" Feb 27 18:59:09 crc kubenswrapper[4981]: I0227 18:59:09.113874 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n7mf\" (UniqueName: 
\"kubernetes.io/projected/5861d4d6-b86c-4181-874e-0e38c28e16cd-kube-api-access-8n7mf\") pod \"crc-storage-crc-wcjkj\" (UID: \"5861d4d6-b86c-4181-874e-0e38c28e16cd\") " pod="crc-storage/crc-storage-crc-wcjkj" Feb 27 18:59:09 crc kubenswrapper[4981]: I0227 18:59:09.182382 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6465" event={"ID":"ef67a254-846a-4518-b825-6fab9803f0d9","Type":"ContainerStarted","Data":"2e737a5d7b3d7c299b14ae80d4182731070e81b60516e4ab7a054d8f9f523c1e"} Feb 27 18:59:09 crc kubenswrapper[4981]: I0227 18:59:09.201986 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-wcjkj" Feb 27 18:59:09 crc kubenswrapper[4981]: E0227 18:59:09.240116 4981 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-wcjkj_crc-storage_5861d4d6-b86c-4181-874e-0e38c28e16cd_0(1aaefef3eada4d9db758a04473dda0bfee97ff4fd18ac418aecaf9b583bc715c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 18:59:09 crc kubenswrapper[4981]: E0227 18:59:09.240235 4981 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-wcjkj_crc-storage_5861d4d6-b86c-4181-874e-0e38c28e16cd_0(1aaefef3eada4d9db758a04473dda0bfee97ff4fd18ac418aecaf9b583bc715c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-wcjkj" Feb 27 18:59:09 crc kubenswrapper[4981]: E0227 18:59:09.240285 4981 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-wcjkj_crc-storage_5861d4d6-b86c-4181-874e-0e38c28e16cd_0(1aaefef3eada4d9db758a04473dda0bfee97ff4fd18ac418aecaf9b583bc715c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-wcjkj" Feb 27 18:59:09 crc kubenswrapper[4981]: E0227 18:59:09.240388 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-wcjkj_crc-storage(5861d4d6-b86c-4181-874e-0e38c28e16cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-wcjkj_crc-storage(5861d4d6-b86c-4181-874e-0e38c28e16cd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-wcjkj_crc-storage_5861d4d6-b86c-4181-874e-0e38c28e16cd_0(1aaefef3eada4d9db758a04473dda0bfee97ff4fd18ac418aecaf9b583bc715c): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-wcjkj" podUID="5861d4d6-b86c-4181-874e-0e38c28e16cd" Feb 27 18:59:10 crc kubenswrapper[4981]: I0227 18:59:10.190737 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-b6465" event={"ID":"ef67a254-846a-4518-b825-6fab9803f0d9","Type":"ContainerStarted","Data":"79262977dfbc71f1debe187d8ef4ad8e2470ce6e26b18d14c23a3b26effe30f0"} Feb 27 18:59:10 crc kubenswrapper[4981]: I0227 18:59:10.192070 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:10 crc kubenswrapper[4981]: I0227 18:59:10.192247 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:10 crc kubenswrapper[4981]: I0227 18:59:10.192291 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:10 crc kubenswrapper[4981]: I0227 18:59:10.228131 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:10 crc kubenswrapper[4981]: I0227 18:59:10.231695 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-b6465" podStartSLOduration=10.231676782 podStartE2EDuration="10.231676782s" podCreationTimestamp="2026-02-27 18:59:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:59:10.227064582 +0000 UTC m=+849.705845762" watchObservedRunningTime="2026-02-27 18:59:10.231676782 +0000 UTC m=+849.710457942" Feb 27 18:59:10 crc kubenswrapper[4981]: I0227 18:59:10.234975 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:10 crc kubenswrapper[4981]: I0227 
18:59:10.332089 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-wcjkj"] Feb 27 18:59:10 crc kubenswrapper[4981]: I0227 18:59:10.332230 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-wcjkj" Feb 27 18:59:10 crc kubenswrapper[4981]: I0227 18:59:10.332856 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-wcjkj" Feb 27 18:59:10 crc kubenswrapper[4981]: E0227 18:59:10.369806 4981 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-wcjkj_crc-storage_5861d4d6-b86c-4181-874e-0e38c28e16cd_0(154b9a5094fc93b8957c6657695ef9534ff2bc49d4dc512fd3fd91da027cd297): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 18:59:10 crc kubenswrapper[4981]: E0227 18:59:10.369896 4981 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-wcjkj_crc-storage_5861d4d6-b86c-4181-874e-0e38c28e16cd_0(154b9a5094fc93b8957c6657695ef9534ff2bc49d4dc512fd3fd91da027cd297): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-wcjkj" Feb 27 18:59:10 crc kubenswrapper[4981]: E0227 18:59:10.369931 4981 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-wcjkj_crc-storage_5861d4d6-b86c-4181-874e-0e38c28e16cd_0(154b9a5094fc93b8957c6657695ef9534ff2bc49d4dc512fd3fd91da027cd297): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-wcjkj" Feb 27 18:59:10 crc kubenswrapper[4981]: E0227 18:59:10.370008 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-wcjkj_crc-storage(5861d4d6-b86c-4181-874e-0e38c28e16cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-wcjkj_crc-storage(5861d4d6-b86c-4181-874e-0e38c28e16cd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-wcjkj_crc-storage_5861d4d6-b86c-4181-874e-0e38c28e16cd_0(154b9a5094fc93b8957c6657695ef9534ff2bc49d4dc512fd3fd91da027cd297): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-wcjkj" podUID="5861d4d6-b86c-4181-874e-0e38c28e16cd" Feb 27 18:59:24 crc kubenswrapper[4981]: I0227 18:59:24.628443 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-wcjkj" Feb 27 18:59:24 crc kubenswrapper[4981]: I0227 18:59:24.629478 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-wcjkj" Feb 27 18:59:24 crc kubenswrapper[4981]: I0227 18:59:24.880487 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-wcjkj"] Feb 27 18:59:24 crc kubenswrapper[4981]: I0227 18:59:24.892883 4981 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 18:59:25 crc kubenswrapper[4981]: I0227 18:59:25.301023 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-wcjkj" event={"ID":"5861d4d6-b86c-4181-874e-0e38c28e16cd","Type":"ContainerStarted","Data":"ba34474dff27a0fdef10f9cdeeadb14e38ea47c8bc2f56ca9165d9931e9822ad"} Feb 27 18:59:27 crc kubenswrapper[4981]: I0227 18:59:27.316189 4981 generic.go:334] "Generic (PLEG): container finished" podID="5861d4d6-b86c-4181-874e-0e38c28e16cd" containerID="5a17afb8b8af46d54ebe85d6331a975c515da2ab56ecf3782487bcb4b021d3cb" exitCode=0 Feb 27 18:59:27 crc kubenswrapper[4981]: I0227 18:59:27.316257 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-wcjkj" event={"ID":"5861d4d6-b86c-4181-874e-0e38c28e16cd","Type":"ContainerDied","Data":"5a17afb8b8af46d54ebe85d6331a975c515da2ab56ecf3782487bcb4b021d3cb"} Feb 27 18:59:28 crc kubenswrapper[4981]: I0227 18:59:28.620939 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-wcjkj" Feb 27 18:59:28 crc kubenswrapper[4981]: I0227 18:59:28.795393 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5861d4d6-b86c-4181-874e-0e38c28e16cd-crc-storage\") pod \"5861d4d6-b86c-4181-874e-0e38c28e16cd\" (UID: \"5861d4d6-b86c-4181-874e-0e38c28e16cd\") " Feb 27 18:59:28 crc kubenswrapper[4981]: I0227 18:59:28.795536 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5861d4d6-b86c-4181-874e-0e38c28e16cd-node-mnt\") pod \"5861d4d6-b86c-4181-874e-0e38c28e16cd\" (UID: \"5861d4d6-b86c-4181-874e-0e38c28e16cd\") " Feb 27 18:59:28 crc kubenswrapper[4981]: I0227 18:59:28.795603 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n7mf\" (UniqueName: \"kubernetes.io/projected/5861d4d6-b86c-4181-874e-0e38c28e16cd-kube-api-access-8n7mf\") pod \"5861d4d6-b86c-4181-874e-0e38c28e16cd\" (UID: \"5861d4d6-b86c-4181-874e-0e38c28e16cd\") " Feb 27 18:59:28 crc kubenswrapper[4981]: I0227 18:59:28.795668 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5861d4d6-b86c-4181-874e-0e38c28e16cd-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "5861d4d6-b86c-4181-874e-0e38c28e16cd" (UID: "5861d4d6-b86c-4181-874e-0e38c28e16cd"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:59:28 crc kubenswrapper[4981]: I0227 18:59:28.795940 4981 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5861d4d6-b86c-4181-874e-0e38c28e16cd-node-mnt\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:28 crc kubenswrapper[4981]: I0227 18:59:28.804443 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5861d4d6-b86c-4181-874e-0e38c28e16cd-kube-api-access-8n7mf" (OuterVolumeSpecName: "kube-api-access-8n7mf") pod "5861d4d6-b86c-4181-874e-0e38c28e16cd" (UID: "5861d4d6-b86c-4181-874e-0e38c28e16cd"). InnerVolumeSpecName "kube-api-access-8n7mf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:59:28 crc kubenswrapper[4981]: I0227 18:59:28.824587 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5861d4d6-b86c-4181-874e-0e38c28e16cd-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "5861d4d6-b86c-4181-874e-0e38c28e16cd" (UID: "5861d4d6-b86c-4181-874e-0e38c28e16cd"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:59:28 crc kubenswrapper[4981]: I0227 18:59:28.896725 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n7mf\" (UniqueName: \"kubernetes.io/projected/5861d4d6-b86c-4181-874e-0e38c28e16cd-kube-api-access-8n7mf\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:28 crc kubenswrapper[4981]: I0227 18:59:28.896782 4981 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5861d4d6-b86c-4181-874e-0e38c28e16cd-crc-storage\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:29 crc kubenswrapper[4981]: I0227 18:59:29.332219 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-wcjkj" event={"ID":"5861d4d6-b86c-4181-874e-0e38c28e16cd","Type":"ContainerDied","Data":"ba34474dff27a0fdef10f9cdeeadb14e38ea47c8bc2f56ca9165d9931e9822ad"} Feb 27 18:59:29 crc kubenswrapper[4981]: I0227 18:59:29.332278 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba34474dff27a0fdef10f9cdeeadb14e38ea47c8bc2f56ca9165d9931e9822ad" Feb 27 18:59:29 crc kubenswrapper[4981]: I0227 18:59:29.332299 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-wcjkj" Feb 27 18:59:31 crc kubenswrapper[4981]: I0227 18:59:31.145637 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-b6465" Feb 27 18:59:37 crc kubenswrapper[4981]: I0227 18:59:37.506804 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp"] Feb 27 18:59:37 crc kubenswrapper[4981]: E0227 18:59:37.507385 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5861d4d6-b86c-4181-874e-0e38c28e16cd" containerName="storage" Feb 27 18:59:37 crc kubenswrapper[4981]: I0227 18:59:37.507399 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="5861d4d6-b86c-4181-874e-0e38c28e16cd" containerName="storage" Feb 27 18:59:37 crc kubenswrapper[4981]: I0227 18:59:37.507507 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="5861d4d6-b86c-4181-874e-0e38c28e16cd" containerName="storage" Feb 27 18:59:37 crc kubenswrapper[4981]: I0227 18:59:37.508253 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp" Feb 27 18:59:37 crc kubenswrapper[4981]: I0227 18:59:37.509804 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 27 18:59:37 crc kubenswrapper[4981]: I0227 18:59:37.517634 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp"] Feb 27 18:59:37 crc kubenswrapper[4981]: I0227 18:59:37.634918 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a16a4f3-0450-40f6-b7b9-26ce12441e3b-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp\" (UID: \"3a16a4f3-0450-40f6-b7b9-26ce12441e3b\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp" Feb 27 18:59:37 crc kubenswrapper[4981]: I0227 18:59:37.634991 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqg2j\" (UniqueName: \"kubernetes.io/projected/3a16a4f3-0450-40f6-b7b9-26ce12441e3b-kube-api-access-nqg2j\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp\" (UID: \"3a16a4f3-0450-40f6-b7b9-26ce12441e3b\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp" Feb 27 18:59:37 crc kubenswrapper[4981]: I0227 18:59:37.635017 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a16a4f3-0450-40f6-b7b9-26ce12441e3b-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp\" (UID: \"3a16a4f3-0450-40f6-b7b9-26ce12441e3b\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp" Feb 27 18:59:37 crc kubenswrapper[4981]: 
I0227 18:59:37.736924 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a16a4f3-0450-40f6-b7b9-26ce12441e3b-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp\" (UID: \"3a16a4f3-0450-40f6-b7b9-26ce12441e3b\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp" Feb 27 18:59:37 crc kubenswrapper[4981]: I0227 18:59:37.736994 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqg2j\" (UniqueName: \"kubernetes.io/projected/3a16a4f3-0450-40f6-b7b9-26ce12441e3b-kube-api-access-nqg2j\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp\" (UID: \"3a16a4f3-0450-40f6-b7b9-26ce12441e3b\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp" Feb 27 18:59:37 crc kubenswrapper[4981]: I0227 18:59:37.737021 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a16a4f3-0450-40f6-b7b9-26ce12441e3b-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp\" (UID: \"3a16a4f3-0450-40f6-b7b9-26ce12441e3b\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp" Feb 27 18:59:37 crc kubenswrapper[4981]: I0227 18:59:37.738343 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a16a4f3-0450-40f6-b7b9-26ce12441e3b-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp\" (UID: \"3a16a4f3-0450-40f6-b7b9-26ce12441e3b\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp" Feb 27 18:59:37 crc kubenswrapper[4981]: I0227 18:59:37.738396 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3a16a4f3-0450-40f6-b7b9-26ce12441e3b-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp\" (UID: \"3a16a4f3-0450-40f6-b7b9-26ce12441e3b\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp" Feb 27 18:59:37 crc kubenswrapper[4981]: I0227 18:59:37.783027 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqg2j\" (UniqueName: \"kubernetes.io/projected/3a16a4f3-0450-40f6-b7b9-26ce12441e3b-kube-api-access-nqg2j\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp\" (UID: \"3a16a4f3-0450-40f6-b7b9-26ce12441e3b\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp" Feb 27 18:59:37 crc kubenswrapper[4981]: I0227 18:59:37.832251 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp" Feb 27 18:59:38 crc kubenswrapper[4981]: I0227 18:59:38.354370 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp"] Feb 27 18:59:38 crc kubenswrapper[4981]: I0227 18:59:38.392129 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp" event={"ID":"3a16a4f3-0450-40f6-b7b9-26ce12441e3b","Type":"ContainerStarted","Data":"ee1b603eb1d25822c6dcc39276db703e02dc3df8bcb9d2d332e72a03d7236bee"} Feb 27 18:59:39 crc kubenswrapper[4981]: I0227 18:59:39.403126 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp" event={"ID":"3a16a4f3-0450-40f6-b7b9-26ce12441e3b","Type":"ContainerStarted","Data":"165e9f1fff3c2ea43b1d284ef54decc0049893b0b6c6effa67a338d2e6df29e5"} Feb 27 18:59:39 crc kubenswrapper[4981]: I0227 18:59:39.878226 4981 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jn27h"] Feb 27 18:59:39 crc kubenswrapper[4981]: I0227 18:59:39.879934 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jn27h" Feb 27 18:59:39 crc kubenswrapper[4981]: I0227 18:59:39.900672 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jn27h"] Feb 27 18:59:40 crc kubenswrapper[4981]: I0227 18:59:40.068593 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133ca76d-9978-4cca-825d-341faab093cd-catalog-content\") pod \"redhat-operators-jn27h\" (UID: \"133ca76d-9978-4cca-825d-341faab093cd\") " pod="openshift-marketplace/redhat-operators-jn27h" Feb 27 18:59:40 crc kubenswrapper[4981]: I0227 18:59:40.068665 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133ca76d-9978-4cca-825d-341faab093cd-utilities\") pod \"redhat-operators-jn27h\" (UID: \"133ca76d-9978-4cca-825d-341faab093cd\") " pod="openshift-marketplace/redhat-operators-jn27h" Feb 27 18:59:40 crc kubenswrapper[4981]: I0227 18:59:40.068857 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hbnj\" (UniqueName: \"kubernetes.io/projected/133ca76d-9978-4cca-825d-341faab093cd-kube-api-access-4hbnj\") pod \"redhat-operators-jn27h\" (UID: \"133ca76d-9978-4cca-825d-341faab093cd\") " pod="openshift-marketplace/redhat-operators-jn27h" Feb 27 18:59:40 crc kubenswrapper[4981]: I0227 18:59:40.169862 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133ca76d-9978-4cca-825d-341faab093cd-catalog-content\") pod \"redhat-operators-jn27h\" (UID: 
\"133ca76d-9978-4cca-825d-341faab093cd\") " pod="openshift-marketplace/redhat-operators-jn27h" Feb 27 18:59:40 crc kubenswrapper[4981]: I0227 18:59:40.169956 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133ca76d-9978-4cca-825d-341faab093cd-utilities\") pod \"redhat-operators-jn27h\" (UID: \"133ca76d-9978-4cca-825d-341faab093cd\") " pod="openshift-marketplace/redhat-operators-jn27h" Feb 27 18:59:40 crc kubenswrapper[4981]: I0227 18:59:40.170099 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hbnj\" (UniqueName: \"kubernetes.io/projected/133ca76d-9978-4cca-825d-341faab093cd-kube-api-access-4hbnj\") pod \"redhat-operators-jn27h\" (UID: \"133ca76d-9978-4cca-825d-341faab093cd\") " pod="openshift-marketplace/redhat-operators-jn27h" Feb 27 18:59:40 crc kubenswrapper[4981]: I0227 18:59:40.170753 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133ca76d-9978-4cca-825d-341faab093cd-catalog-content\") pod \"redhat-operators-jn27h\" (UID: \"133ca76d-9978-4cca-825d-341faab093cd\") " pod="openshift-marketplace/redhat-operators-jn27h" Feb 27 18:59:40 crc kubenswrapper[4981]: I0227 18:59:40.170793 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133ca76d-9978-4cca-825d-341faab093cd-utilities\") pod \"redhat-operators-jn27h\" (UID: \"133ca76d-9978-4cca-825d-341faab093cd\") " pod="openshift-marketplace/redhat-operators-jn27h" Feb 27 18:59:40 crc kubenswrapper[4981]: I0227 18:59:40.194360 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hbnj\" (UniqueName: \"kubernetes.io/projected/133ca76d-9978-4cca-825d-341faab093cd-kube-api-access-4hbnj\") pod \"redhat-operators-jn27h\" (UID: \"133ca76d-9978-4cca-825d-341faab093cd\") " 
pod="openshift-marketplace/redhat-operators-jn27h" Feb 27 18:59:40 crc kubenswrapper[4981]: I0227 18:59:40.210655 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jn27h" Feb 27 18:59:40 crc kubenswrapper[4981]: I0227 18:59:40.440643 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jn27h"] Feb 27 18:59:40 crc kubenswrapper[4981]: W0227 18:59:40.448418 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod133ca76d_9978_4cca_825d_341faab093cd.slice/crio-462d00ab05f962478580caf26239f41b0660c31dfe63488f39ed91ff2b47ac56 WatchSource:0}: Error finding container 462d00ab05f962478580caf26239f41b0660c31dfe63488f39ed91ff2b47ac56: Status 404 returned error can't find the container with id 462d00ab05f962478580caf26239f41b0660c31dfe63488f39ed91ff2b47ac56 Feb 27 18:59:41 crc kubenswrapper[4981]: I0227 18:59:41.415597 4981 generic.go:334] "Generic (PLEG): container finished" podID="3a16a4f3-0450-40f6-b7b9-26ce12441e3b" containerID="165e9f1fff3c2ea43b1d284ef54decc0049893b0b6c6effa67a338d2e6df29e5" exitCode=0 Feb 27 18:59:41 crc kubenswrapper[4981]: I0227 18:59:41.415723 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp" event={"ID":"3a16a4f3-0450-40f6-b7b9-26ce12441e3b","Type":"ContainerDied","Data":"165e9f1fff3c2ea43b1d284ef54decc0049893b0b6c6effa67a338d2e6df29e5"} Feb 27 18:59:41 crc kubenswrapper[4981]: I0227 18:59:41.418509 4981 generic.go:334] "Generic (PLEG): container finished" podID="133ca76d-9978-4cca-825d-341faab093cd" containerID="2ce7c10e40b757bd57144915837ab580e109dd8e37399d847908281a514baba9" exitCode=0 Feb 27 18:59:41 crc kubenswrapper[4981]: I0227 18:59:41.418540 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jn27h" 
event={"ID":"133ca76d-9978-4cca-825d-341faab093cd","Type":"ContainerDied","Data":"2ce7c10e40b757bd57144915837ab580e109dd8e37399d847908281a514baba9"} Feb 27 18:59:41 crc kubenswrapper[4981]: I0227 18:59:41.418560 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jn27h" event={"ID":"133ca76d-9978-4cca-825d-341faab093cd","Type":"ContainerStarted","Data":"462d00ab05f962478580caf26239f41b0660c31dfe63488f39ed91ff2b47ac56"} Feb 27 18:59:42 crc kubenswrapper[4981]: I0227 18:59:42.425212 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jn27h" event={"ID":"133ca76d-9978-4cca-825d-341faab093cd","Type":"ContainerStarted","Data":"e3cea1e907ad2bb4a81ae5ef0f9aff8365362ff2a01bd38f79f26850d1a61e8b"} Feb 27 18:59:43 crc kubenswrapper[4981]: I0227 18:59:43.433695 4981 generic.go:334] "Generic (PLEG): container finished" podID="133ca76d-9978-4cca-825d-341faab093cd" containerID="e3cea1e907ad2bb4a81ae5ef0f9aff8365362ff2a01bd38f79f26850d1a61e8b" exitCode=0 Feb 27 18:59:43 crc kubenswrapper[4981]: I0227 18:59:43.433826 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jn27h" event={"ID":"133ca76d-9978-4cca-825d-341faab093cd","Type":"ContainerDied","Data":"e3cea1e907ad2bb4a81ae5ef0f9aff8365362ff2a01bd38f79f26850d1a61e8b"} Feb 27 18:59:43 crc kubenswrapper[4981]: I0227 18:59:43.435959 4981 generic.go:334] "Generic (PLEG): container finished" podID="3a16a4f3-0450-40f6-b7b9-26ce12441e3b" containerID="5dc4e7a21ae16a5866ae909e1421a072f4c7ce079d94bb6f5cdff4c91c61cff7" exitCode=0 Feb 27 18:59:43 crc kubenswrapper[4981]: I0227 18:59:43.436007 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp" event={"ID":"3a16a4f3-0450-40f6-b7b9-26ce12441e3b","Type":"ContainerDied","Data":"5dc4e7a21ae16a5866ae909e1421a072f4c7ce079d94bb6f5cdff4c91c61cff7"} Feb 27 18:59:44 crc 
kubenswrapper[4981]: I0227 18:59:44.465867 4981 generic.go:334] "Generic (PLEG): container finished" podID="3a16a4f3-0450-40f6-b7b9-26ce12441e3b" containerID="8ddb5f566004c3ef106567cae6f30ecf667fcc4d276e7bff862ecb10c4c1dfe6" exitCode=0 Feb 27 18:59:44 crc kubenswrapper[4981]: I0227 18:59:44.465918 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp" event={"ID":"3a16a4f3-0450-40f6-b7b9-26ce12441e3b","Type":"ContainerDied","Data":"8ddb5f566004c3ef106567cae6f30ecf667fcc4d276e7bff862ecb10c4c1dfe6"} Feb 27 18:59:44 crc kubenswrapper[4981]: I0227 18:59:44.472242 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jn27h" event={"ID":"133ca76d-9978-4cca-825d-341faab093cd","Type":"ContainerStarted","Data":"5583d0ac4946c78bc2adf2e2ba66c3174d0694a348cc32d1817375c6339546e1"} Feb 27 18:59:44 crc kubenswrapper[4981]: I0227 18:59:44.512904 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jn27h" podStartSLOduration=2.869242554 podStartE2EDuration="5.512884151s" podCreationTimestamp="2026-02-27 18:59:39 +0000 UTC" firstStartedPulling="2026-02-27 18:59:41.420157928 +0000 UTC m=+880.898939088" lastFinishedPulling="2026-02-27 18:59:44.063799495 +0000 UTC m=+883.542580685" observedRunningTime="2026-02-27 18:59:44.509779516 +0000 UTC m=+883.988560686" watchObservedRunningTime="2026-02-27 18:59:44.512884151 +0000 UTC m=+883.991665321" Feb 27 18:59:45 crc kubenswrapper[4981]: I0227 18:59:45.778777 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp" Feb 27 18:59:45 crc kubenswrapper[4981]: I0227 18:59:45.951531 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqg2j\" (UniqueName: \"kubernetes.io/projected/3a16a4f3-0450-40f6-b7b9-26ce12441e3b-kube-api-access-nqg2j\") pod \"3a16a4f3-0450-40f6-b7b9-26ce12441e3b\" (UID: \"3a16a4f3-0450-40f6-b7b9-26ce12441e3b\") " Feb 27 18:59:45 crc kubenswrapper[4981]: I0227 18:59:45.951642 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a16a4f3-0450-40f6-b7b9-26ce12441e3b-bundle\") pod \"3a16a4f3-0450-40f6-b7b9-26ce12441e3b\" (UID: \"3a16a4f3-0450-40f6-b7b9-26ce12441e3b\") " Feb 27 18:59:45 crc kubenswrapper[4981]: I0227 18:59:45.951713 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a16a4f3-0450-40f6-b7b9-26ce12441e3b-util\") pod \"3a16a4f3-0450-40f6-b7b9-26ce12441e3b\" (UID: \"3a16a4f3-0450-40f6-b7b9-26ce12441e3b\") " Feb 27 18:59:45 crc kubenswrapper[4981]: I0227 18:59:45.952449 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a16a4f3-0450-40f6-b7b9-26ce12441e3b-bundle" (OuterVolumeSpecName: "bundle") pod "3a16a4f3-0450-40f6-b7b9-26ce12441e3b" (UID: "3a16a4f3-0450-40f6-b7b9-26ce12441e3b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:59:45 crc kubenswrapper[4981]: I0227 18:59:45.964474 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a16a4f3-0450-40f6-b7b9-26ce12441e3b-kube-api-access-nqg2j" (OuterVolumeSpecName: "kube-api-access-nqg2j") pod "3a16a4f3-0450-40f6-b7b9-26ce12441e3b" (UID: "3a16a4f3-0450-40f6-b7b9-26ce12441e3b"). InnerVolumeSpecName "kube-api-access-nqg2j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:59:45 crc kubenswrapper[4981]: I0227 18:59:45.966883 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a16a4f3-0450-40f6-b7b9-26ce12441e3b-util" (OuterVolumeSpecName: "util") pod "3a16a4f3-0450-40f6-b7b9-26ce12441e3b" (UID: "3a16a4f3-0450-40f6-b7b9-26ce12441e3b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:59:46 crc kubenswrapper[4981]: I0227 18:59:46.053018 4981 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3a16a4f3-0450-40f6-b7b9-26ce12441e3b-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:46 crc kubenswrapper[4981]: I0227 18:59:46.053081 4981 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3a16a4f3-0450-40f6-b7b9-26ce12441e3b-util\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:46 crc kubenswrapper[4981]: I0227 18:59:46.053096 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqg2j\" (UniqueName: \"kubernetes.io/projected/3a16a4f3-0450-40f6-b7b9-26ce12441e3b-kube-api-access-nqg2j\") on node \"crc\" DevicePath \"\"" Feb 27 18:59:46 crc kubenswrapper[4981]: I0227 18:59:46.490955 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp" event={"ID":"3a16a4f3-0450-40f6-b7b9-26ce12441e3b","Type":"ContainerDied","Data":"ee1b603eb1d25822c6dcc39276db703e02dc3df8bcb9d2d332e72a03d7236bee"} Feb 27 18:59:46 crc kubenswrapper[4981]: I0227 18:59:46.491014 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee1b603eb1d25822c6dcc39276db703e02dc3df8bcb9d2d332e72a03d7236bee" Feb 27 18:59:46 crc kubenswrapper[4981]: I0227 18:59:46.491045 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp" Feb 27 18:59:48 crc kubenswrapper[4981]: I0227 18:59:48.021789 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-kxr86"] Feb 27 18:59:48 crc kubenswrapper[4981]: E0227 18:59:48.022037 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a16a4f3-0450-40f6-b7b9-26ce12441e3b" containerName="extract" Feb 27 18:59:48 crc kubenswrapper[4981]: I0227 18:59:48.022083 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a16a4f3-0450-40f6-b7b9-26ce12441e3b" containerName="extract" Feb 27 18:59:48 crc kubenswrapper[4981]: E0227 18:59:48.022103 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a16a4f3-0450-40f6-b7b9-26ce12441e3b" containerName="pull" Feb 27 18:59:48 crc kubenswrapper[4981]: I0227 18:59:48.022109 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a16a4f3-0450-40f6-b7b9-26ce12441e3b" containerName="pull" Feb 27 18:59:48 crc kubenswrapper[4981]: E0227 18:59:48.022117 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a16a4f3-0450-40f6-b7b9-26ce12441e3b" containerName="util" Feb 27 18:59:48 crc kubenswrapper[4981]: I0227 18:59:48.022124 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a16a4f3-0450-40f6-b7b9-26ce12441e3b" containerName="util" Feb 27 18:59:48 crc kubenswrapper[4981]: I0227 18:59:48.022218 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a16a4f3-0450-40f6-b7b9-26ce12441e3b" containerName="extract" Feb 27 18:59:48 crc kubenswrapper[4981]: I0227 18:59:48.022647 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-kxr86" Feb 27 18:59:48 crc kubenswrapper[4981]: I0227 18:59:48.028255 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-lm6kg" Feb 27 18:59:48 crc kubenswrapper[4981]: I0227 18:59:48.036594 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 27 18:59:48 crc kubenswrapper[4981]: I0227 18:59:48.036781 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 27 18:59:48 crc kubenswrapper[4981]: I0227 18:59:48.058812 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-kxr86"] Feb 27 18:59:48 crc kubenswrapper[4981]: I0227 18:59:48.208234 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8rl7\" (UniqueName: \"kubernetes.io/projected/8b932696-4ff1-477c-a0d4-710f47224107-kube-api-access-m8rl7\") pod \"nmstate-operator-75c5dccd6c-kxr86\" (UID: \"8b932696-4ff1-477c-a0d4-710f47224107\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-kxr86" Feb 27 18:59:48 crc kubenswrapper[4981]: I0227 18:59:48.310115 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8rl7\" (UniqueName: \"kubernetes.io/projected/8b932696-4ff1-477c-a0d4-710f47224107-kube-api-access-m8rl7\") pod \"nmstate-operator-75c5dccd6c-kxr86\" (UID: \"8b932696-4ff1-477c-a0d4-710f47224107\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-kxr86" Feb 27 18:59:48 crc kubenswrapper[4981]: I0227 18:59:48.343882 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8rl7\" (UniqueName: \"kubernetes.io/projected/8b932696-4ff1-477c-a0d4-710f47224107-kube-api-access-m8rl7\") pod \"nmstate-operator-75c5dccd6c-kxr86\" (UID: 
\"8b932696-4ff1-477c-a0d4-710f47224107\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-kxr86" Feb 27 18:59:48 crc kubenswrapper[4981]: I0227 18:59:48.639856 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-kxr86" Feb 27 18:59:49 crc kubenswrapper[4981]: I0227 18:59:49.115357 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-kxr86"] Feb 27 18:59:49 crc kubenswrapper[4981]: W0227 18:59:49.124239 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b932696_4ff1_477c_a0d4_710f47224107.slice/crio-b809dffbd57512201e105e9fa04dd22bf2a972ead7486201e725ca236bbacfdd WatchSource:0}: Error finding container b809dffbd57512201e105e9fa04dd22bf2a972ead7486201e725ca236bbacfdd: Status 404 returned error can't find the container with id b809dffbd57512201e105e9fa04dd22bf2a972ead7486201e725ca236bbacfdd Feb 27 18:59:49 crc kubenswrapper[4981]: I0227 18:59:49.527669 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-kxr86" event={"ID":"8b932696-4ff1-477c-a0d4-710f47224107","Type":"ContainerStarted","Data":"b809dffbd57512201e105e9fa04dd22bf2a972ead7486201e725ca236bbacfdd"} Feb 27 18:59:49 crc kubenswrapper[4981]: I0227 18:59:49.872442 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qj99b"] Feb 27 18:59:49 crc kubenswrapper[4981]: I0227 18:59:49.874151 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qj99b" Feb 27 18:59:49 crc kubenswrapper[4981]: I0227 18:59:49.886150 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qj99b"] Feb 27 18:59:50 crc kubenswrapper[4981]: I0227 18:59:50.030079 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxsn9\" (UniqueName: \"kubernetes.io/projected/d889c4d8-3c9b-41b4-be84-4026b0967d12-kube-api-access-vxsn9\") pod \"community-operators-qj99b\" (UID: \"d889c4d8-3c9b-41b4-be84-4026b0967d12\") " pod="openshift-marketplace/community-operators-qj99b" Feb 27 18:59:50 crc kubenswrapper[4981]: I0227 18:59:50.030196 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d889c4d8-3c9b-41b4-be84-4026b0967d12-utilities\") pod \"community-operators-qj99b\" (UID: \"d889c4d8-3c9b-41b4-be84-4026b0967d12\") " pod="openshift-marketplace/community-operators-qj99b" Feb 27 18:59:50 crc kubenswrapper[4981]: I0227 18:59:50.031181 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d889c4d8-3c9b-41b4-be84-4026b0967d12-catalog-content\") pod \"community-operators-qj99b\" (UID: \"d889c4d8-3c9b-41b4-be84-4026b0967d12\") " pod="openshift-marketplace/community-operators-qj99b" Feb 27 18:59:50 crc kubenswrapper[4981]: I0227 18:59:50.132405 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxsn9\" (UniqueName: \"kubernetes.io/projected/d889c4d8-3c9b-41b4-be84-4026b0967d12-kube-api-access-vxsn9\") pod \"community-operators-qj99b\" (UID: \"d889c4d8-3c9b-41b4-be84-4026b0967d12\") " pod="openshift-marketplace/community-operators-qj99b" Feb 27 18:59:50 crc kubenswrapper[4981]: I0227 18:59:50.132511 4981 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d889c4d8-3c9b-41b4-be84-4026b0967d12-utilities\") pod \"community-operators-qj99b\" (UID: \"d889c4d8-3c9b-41b4-be84-4026b0967d12\") " pod="openshift-marketplace/community-operators-qj99b" Feb 27 18:59:50 crc kubenswrapper[4981]: I0227 18:59:50.132566 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d889c4d8-3c9b-41b4-be84-4026b0967d12-catalog-content\") pod \"community-operators-qj99b\" (UID: \"d889c4d8-3c9b-41b4-be84-4026b0967d12\") " pod="openshift-marketplace/community-operators-qj99b" Feb 27 18:59:50 crc kubenswrapper[4981]: I0227 18:59:50.133385 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d889c4d8-3c9b-41b4-be84-4026b0967d12-catalog-content\") pod \"community-operators-qj99b\" (UID: \"d889c4d8-3c9b-41b4-be84-4026b0967d12\") " pod="openshift-marketplace/community-operators-qj99b" Feb 27 18:59:50 crc kubenswrapper[4981]: I0227 18:59:50.133586 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d889c4d8-3c9b-41b4-be84-4026b0967d12-utilities\") pod \"community-operators-qj99b\" (UID: \"d889c4d8-3c9b-41b4-be84-4026b0967d12\") " pod="openshift-marketplace/community-operators-qj99b" Feb 27 18:59:50 crc kubenswrapper[4981]: I0227 18:59:50.156209 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxsn9\" (UniqueName: \"kubernetes.io/projected/d889c4d8-3c9b-41b4-be84-4026b0967d12-kube-api-access-vxsn9\") pod \"community-operators-qj99b\" (UID: \"d889c4d8-3c9b-41b4-be84-4026b0967d12\") " pod="openshift-marketplace/community-operators-qj99b" Feb 27 18:59:50 crc kubenswrapper[4981]: I0227 18:59:50.203707 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qj99b" Feb 27 18:59:50 crc kubenswrapper[4981]: I0227 18:59:50.211152 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jn27h" Feb 27 18:59:50 crc kubenswrapper[4981]: I0227 18:59:50.211236 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jn27h" Feb 27 18:59:50 crc kubenswrapper[4981]: I0227 18:59:50.416850 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qj99b"] Feb 27 18:59:50 crc kubenswrapper[4981]: W0227 18:59:50.426830 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd889c4d8_3c9b_41b4_be84_4026b0967d12.slice/crio-ef2efb8acda9dc7f99e94a0e148178979b7d20d26035569d7cace4c3cc4356ec WatchSource:0}: Error finding container ef2efb8acda9dc7f99e94a0e148178979b7d20d26035569d7cace4c3cc4356ec: Status 404 returned error can't find the container with id ef2efb8acda9dc7f99e94a0e148178979b7d20d26035569d7cace4c3cc4356ec Feb 27 18:59:50 crc kubenswrapper[4981]: I0227 18:59:50.533000 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qj99b" event={"ID":"d889c4d8-3c9b-41b4-be84-4026b0967d12","Type":"ContainerStarted","Data":"ef2efb8acda9dc7f99e94a0e148178979b7d20d26035569d7cace4c3cc4356ec"} Feb 27 18:59:51 crc kubenswrapper[4981]: I0227 18:59:51.258919 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jn27h" podUID="133ca76d-9978-4cca-825d-341faab093cd" containerName="registry-server" probeResult="failure" output=< Feb 27 18:59:51 crc kubenswrapper[4981]: timeout: failed to connect service ":50051" within 1s Feb 27 18:59:51 crc kubenswrapper[4981]: > Feb 27 18:59:55 crc kubenswrapper[4981]: I0227 18:59:55.571228 4981 generic.go:334] "Generic 
(PLEG): container finished" podID="d889c4d8-3c9b-41b4-be84-4026b0967d12" containerID="7740ea2d053fc4d46fd9961be74f3c74af86577590daf36dd437578f4f810224" exitCode=0 Feb 27 18:59:55 crc kubenswrapper[4981]: I0227 18:59:55.571327 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qj99b" event={"ID":"d889c4d8-3c9b-41b4-be84-4026b0967d12","Type":"ContainerDied","Data":"7740ea2d053fc4d46fd9961be74f3c74af86577590daf36dd437578f4f810224"} Feb 27 18:59:59 crc kubenswrapper[4981]: I0227 18:59:59.638940 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qj99b" event={"ID":"d889c4d8-3c9b-41b4-be84-4026b0967d12","Type":"ContainerStarted","Data":"cd7ce321211ff43eadf659f5e81416e69679ad16e560bde720341817a0aa0794"} Feb 27 18:59:59 crc kubenswrapper[4981]: I0227 18:59:59.639708 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-kxr86" event={"ID":"8b932696-4ff1-477c-a0d4-710f47224107","Type":"ContainerStarted","Data":"bc6d5e6b5e4b0cddee0a8dd4a3f0a966c10a9d5c48cb7b55a656f16dc409815b"} Feb 27 18:59:59 crc kubenswrapper[4981]: I0227 18:59:59.648734 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-kxr86" podStartSLOduration=1.3397484849999999 podStartE2EDuration="11.648715478s" podCreationTimestamp="2026-02-27 18:59:48 +0000 UTC" firstStartedPulling="2026-02-27 18:59:49.125842685 +0000 UTC m=+888.604623845" lastFinishedPulling="2026-02-27 18:59:59.434809678 +0000 UTC m=+898.913590838" observedRunningTime="2026-02-27 18:59:59.647513531 +0000 UTC m=+899.126294731" watchObservedRunningTime="2026-02-27 18:59:59.648715478 +0000 UTC m=+899.127496668" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.155021 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536980-lq4dg"] Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 
19:00:00.155786 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536980-lq4dg" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.162211 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.163249 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.164555 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.171906 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536980-lq4dg"] Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.209112 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wwrr\" (UniqueName: \"kubernetes.io/projected/46100661-2b00-441b-a3e6-394279e60051-kube-api-access-8wwrr\") pod \"auto-csr-approver-29536980-lq4dg\" (UID: \"46100661-2b00-441b-a3e6-394279e60051\") " pod="openshift-infra/auto-csr-approver-29536980-lq4dg" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.250943 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536980-bf5w8"] Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.251612 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-bf5w8" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.253213 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.253495 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.261834 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536980-bf5w8"] Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.281664 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jn27h" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.310323 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wwrr\" (UniqueName: \"kubernetes.io/projected/46100661-2b00-441b-a3e6-394279e60051-kube-api-access-8wwrr\") pod \"auto-csr-approver-29536980-lq4dg\" (UID: \"46100661-2b00-441b-a3e6-394279e60051\") " pod="openshift-infra/auto-csr-approver-29536980-lq4dg" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.332832 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wwrr\" (UniqueName: \"kubernetes.io/projected/46100661-2b00-441b-a3e6-394279e60051-kube-api-access-8wwrr\") pod \"auto-csr-approver-29536980-lq4dg\" (UID: \"46100661-2b00-441b-a3e6-394279e60051\") " pod="openshift-infra/auto-csr-approver-29536980-lq4dg" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.345283 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jn27h" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.412094 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgjbq\" (UniqueName: \"kubernetes.io/projected/aa7d7b4b-cc16-4da4-94d1-4daae958bacf-kube-api-access-qgjbq\") pod \"collect-profiles-29536980-bf5w8\" (UID: \"aa7d7b4b-cc16-4da4-94d1-4daae958bacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-bf5w8" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.412210 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa7d7b4b-cc16-4da4-94d1-4daae958bacf-secret-volume\") pod \"collect-profiles-29536980-bf5w8\" (UID: \"aa7d7b4b-cc16-4da4-94d1-4daae958bacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-bf5w8" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.412269 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa7d7b4b-cc16-4da4-94d1-4daae958bacf-config-volume\") pod \"collect-profiles-29536980-bf5w8\" (UID: \"aa7d7b4b-cc16-4da4-94d1-4daae958bacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-bf5w8" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.477599 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536980-lq4dg" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.513210 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgjbq\" (UniqueName: \"kubernetes.io/projected/aa7d7b4b-cc16-4da4-94d1-4daae958bacf-kube-api-access-qgjbq\") pod \"collect-profiles-29536980-bf5w8\" (UID: \"aa7d7b4b-cc16-4da4-94d1-4daae958bacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-bf5w8" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.513610 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa7d7b4b-cc16-4da4-94d1-4daae958bacf-secret-volume\") pod \"collect-profiles-29536980-bf5w8\" (UID: \"aa7d7b4b-cc16-4da4-94d1-4daae958bacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-bf5w8" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.513669 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa7d7b4b-cc16-4da4-94d1-4daae958bacf-config-volume\") pod \"collect-profiles-29536980-bf5w8\" (UID: \"aa7d7b4b-cc16-4da4-94d1-4daae958bacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-bf5w8" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.514772 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa7d7b4b-cc16-4da4-94d1-4daae958bacf-config-volume\") pod \"collect-profiles-29536980-bf5w8\" (UID: \"aa7d7b4b-cc16-4da4-94d1-4daae958bacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-bf5w8" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.521166 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/aa7d7b4b-cc16-4da4-94d1-4daae958bacf-secret-volume\") pod \"collect-profiles-29536980-bf5w8\" (UID: \"aa7d7b4b-cc16-4da4-94d1-4daae958bacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-bf5w8" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.538216 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgjbq\" (UniqueName: \"kubernetes.io/projected/aa7d7b4b-cc16-4da4-94d1-4daae958bacf-kube-api-access-qgjbq\") pod \"collect-profiles-29536980-bf5w8\" (UID: \"aa7d7b4b-cc16-4da4-94d1-4daae958bacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-bf5w8" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.569301 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-bf5w8" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.645515 4981 generic.go:334] "Generic (PLEG): container finished" podID="d889c4d8-3c9b-41b4-be84-4026b0967d12" containerID="cd7ce321211ff43eadf659f5e81416e69679ad16e560bde720341817a0aa0794" exitCode=0 Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.646277 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qj99b" event={"ID":"d889c4d8-3c9b-41b4-be84-4026b0967d12","Type":"ContainerDied","Data":"cd7ce321211ff43eadf659f5e81416e69679ad16e560bde720341817a0aa0794"} Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.685468 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-s77xl"] Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.686240 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-s77xl" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.689014 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-p2mhz" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.709190 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-hqtml"] Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.710209 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-hqtml" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.713099 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.720628 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-hqtml"] Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.725202 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-5g6ww"] Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.735021 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-5g6ww" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.743582 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-s77xl"] Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.754946 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536980-lq4dg"] Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.817254 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm6b2\" (UniqueName: \"kubernetes.io/projected/fecb7b86-12e2-42ad-af13-8d5b3cbcae05-kube-api-access-gm6b2\") pod \"nmstate-webhook-786f45cff4-hqtml\" (UID: \"fecb7b86-12e2-42ad-af13-8d5b3cbcae05\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-hqtml" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.817295 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjnck\" (UniqueName: \"kubernetes.io/projected/59ad1269-77a3-4b47-833b-60da24bbe283-kube-api-access-bjnck\") pod \"nmstate-metrics-69594cc75-s77xl\" (UID: \"59ad1269-77a3-4b47-833b-60da24bbe283\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-s77xl" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.817361 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fecb7b86-12e2-42ad-af13-8d5b3cbcae05-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-hqtml\" (UID: \"fecb7b86-12e2-42ad-af13-8d5b3cbcae05\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-hqtml" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.830084 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4fnww"] Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.830736 4981 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4fnww" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.834540 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.834651 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-flpz6" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.834661 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.843428 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4fnww"] Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.918651 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/252e90b6-bbc9-40ed-a7f6-df2cd8bec420-dbus-socket\") pod \"nmstate-handler-5g6ww\" (UID: \"252e90b6-bbc9-40ed-a7f6-df2cd8bec420\") " pod="openshift-nmstate/nmstate-handler-5g6ww" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.918709 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fecb7b86-12e2-42ad-af13-8d5b3cbcae05-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-hqtml\" (UID: \"fecb7b86-12e2-42ad-af13-8d5b3cbcae05\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-hqtml" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.918791 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/252e90b6-bbc9-40ed-a7f6-df2cd8bec420-ovs-socket\") pod \"nmstate-handler-5g6ww\" (UID: \"252e90b6-bbc9-40ed-a7f6-df2cd8bec420\") " pod="openshift-nmstate/nmstate-handler-5g6ww" Feb 27 19:00:00 
crc kubenswrapper[4981]: I0227 19:00:00.918832 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm6b2\" (UniqueName: \"kubernetes.io/projected/fecb7b86-12e2-42ad-af13-8d5b3cbcae05-kube-api-access-gm6b2\") pod \"nmstate-webhook-786f45cff4-hqtml\" (UID: \"fecb7b86-12e2-42ad-af13-8d5b3cbcae05\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-hqtml" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.918859 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjnck\" (UniqueName: \"kubernetes.io/projected/59ad1269-77a3-4b47-833b-60da24bbe283-kube-api-access-bjnck\") pod \"nmstate-metrics-69594cc75-s77xl\" (UID: \"59ad1269-77a3-4b47-833b-60da24bbe283\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-s77xl" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.918892 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s88t9\" (UniqueName: \"kubernetes.io/projected/252e90b6-bbc9-40ed-a7f6-df2cd8bec420-kube-api-access-s88t9\") pod \"nmstate-handler-5g6ww\" (UID: \"252e90b6-bbc9-40ed-a7f6-df2cd8bec420\") " pod="openshift-nmstate/nmstate-handler-5g6ww" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.918938 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/252e90b6-bbc9-40ed-a7f6-df2cd8bec420-nmstate-lock\") pod \"nmstate-handler-5g6ww\" (UID: \"252e90b6-bbc9-40ed-a7f6-df2cd8bec420\") " pod="openshift-nmstate/nmstate-handler-5g6ww" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.927246 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fecb7b86-12e2-42ad-af13-8d5b3cbcae05-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-hqtml\" (UID: \"fecb7b86-12e2-42ad-af13-8d5b3cbcae05\") " 
pod="openshift-nmstate/nmstate-webhook-786f45cff4-hqtml" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.941122 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm6b2\" (UniqueName: \"kubernetes.io/projected/fecb7b86-12e2-42ad-af13-8d5b3cbcae05-kube-api-access-gm6b2\") pod \"nmstate-webhook-786f45cff4-hqtml\" (UID: \"fecb7b86-12e2-42ad-af13-8d5b3cbcae05\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-hqtml" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.943217 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjnck\" (UniqueName: \"kubernetes.io/projected/59ad1269-77a3-4b47-833b-60da24bbe283-kube-api-access-bjnck\") pod \"nmstate-metrics-69594cc75-s77xl\" (UID: \"59ad1269-77a3-4b47-833b-60da24bbe283\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-s77xl" Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.999180 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-679b7bfb6c-6ncth"] Feb 27 19:00:00 crc kubenswrapper[4981]: I0227 19:00:00.999793 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.019298 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-s77xl" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.019511 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/252e90b6-bbc9-40ed-a7f6-df2cd8bec420-dbus-socket\") pod \"nmstate-handler-5g6ww\" (UID: \"252e90b6-bbc9-40ed-a7f6-df2cd8bec420\") " pod="openshift-nmstate/nmstate-handler-5g6ww" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.019570 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bacb7ae1-b5de-400a-9182-850c94bff2ac-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-4fnww\" (UID: \"bacb7ae1-b5de-400a-9182-850c94bff2ac\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4fnww" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.019613 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/252e90b6-bbc9-40ed-a7f6-df2cd8bec420-ovs-socket\") pod \"nmstate-handler-5g6ww\" (UID: \"252e90b6-bbc9-40ed-a7f6-df2cd8bec420\") " pod="openshift-nmstate/nmstate-handler-5g6ww" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.019642 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87xbh\" (UniqueName: \"kubernetes.io/projected/bacb7ae1-b5de-400a-9182-850c94bff2ac-kube-api-access-87xbh\") pod \"nmstate-console-plugin-5dcbbd79cf-4fnww\" (UID: \"bacb7ae1-b5de-400a-9182-850c94bff2ac\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4fnww" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.019668 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s88t9\" (UniqueName: 
\"kubernetes.io/projected/252e90b6-bbc9-40ed-a7f6-df2cd8bec420-kube-api-access-s88t9\") pod \"nmstate-handler-5g6ww\" (UID: \"252e90b6-bbc9-40ed-a7f6-df2cd8bec420\") " pod="openshift-nmstate/nmstate-handler-5g6ww" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.019697 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/252e90b6-bbc9-40ed-a7f6-df2cd8bec420-nmstate-lock\") pod \"nmstate-handler-5g6ww\" (UID: \"252e90b6-bbc9-40ed-a7f6-df2cd8bec420\") " pod="openshift-nmstate/nmstate-handler-5g6ww" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.019717 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bacb7ae1-b5de-400a-9182-850c94bff2ac-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-4fnww\" (UID: \"bacb7ae1-b5de-400a-9182-850c94bff2ac\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4fnww" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.019812 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/252e90b6-bbc9-40ed-a7f6-df2cd8bec420-ovs-socket\") pod \"nmstate-handler-5g6ww\" (UID: \"252e90b6-bbc9-40ed-a7f6-df2cd8bec420\") " pod="openshift-nmstate/nmstate-handler-5g6ww" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.019850 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/252e90b6-bbc9-40ed-a7f6-df2cd8bec420-nmstate-lock\") pod \"nmstate-handler-5g6ww\" (UID: \"252e90b6-bbc9-40ed-a7f6-df2cd8bec420\") " pod="openshift-nmstate/nmstate-handler-5g6ww" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.019854 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/252e90b6-bbc9-40ed-a7f6-df2cd8bec420-dbus-socket\") pod \"nmstate-handler-5g6ww\" (UID: \"252e90b6-bbc9-40ed-a7f6-df2cd8bec420\") " pod="openshift-nmstate/nmstate-handler-5g6ww" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.061424 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-hqtml" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.062279 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s88t9\" (UniqueName: \"kubernetes.io/projected/252e90b6-bbc9-40ed-a7f6-df2cd8bec420-kube-api-access-s88t9\") pod \"nmstate-handler-5g6ww\" (UID: \"252e90b6-bbc9-40ed-a7f6-df2cd8bec420\") " pod="openshift-nmstate/nmstate-handler-5g6ww" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.081070 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-679b7bfb6c-6ncth"] Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.082587 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-5g6ww" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.085408 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jn27h"] Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.103675 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536980-bf5w8"] Feb 27 19:00:01 crc kubenswrapper[4981]: W0227 19:00:01.113249 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod252e90b6_bbc9_40ed_a7f6_df2cd8bec420.slice/crio-3713f198971d0821537aa3f8e9f07c7904cde2dc35df82b2cd7bb67b12968d0f WatchSource:0}: Error finding container 3713f198971d0821537aa3f8e9f07c7904cde2dc35df82b2cd7bb67b12968d0f: Status 404 returned error can't find the container with id 3713f198971d0821537aa3f8e9f07c7904cde2dc35df82b2cd7bb67b12968d0f Feb 27 19:00:01 crc kubenswrapper[4981]: W0227 19:00:01.114966 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa7d7b4b_cc16_4da4_94d1_4daae958bacf.slice/crio-06df6b2153a5c0638b3dff929cd2c8e21ec90ac0290ef402d85bd71b55262a1f WatchSource:0}: Error finding container 06df6b2153a5c0638b3dff929cd2c8e21ec90ac0290ef402d85bd71b55262a1f: Status 404 returned error can't find the container with id 06df6b2153a5c0638b3dff929cd2c8e21ec90ac0290ef402d85bd71b55262a1f Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.120542 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6619b1c1-dd68-4fa1-b77a-68c456cc557e-console-serving-cert\") pod \"console-679b7bfb6c-6ncth\" (UID: \"6619b1c1-dd68-4fa1-b77a-68c456cc557e\") " pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.120612 4981 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bacb7ae1-b5de-400a-9182-850c94bff2ac-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-4fnww\" (UID: \"bacb7ae1-b5de-400a-9182-850c94bff2ac\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4fnww" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.120632 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6619b1c1-dd68-4fa1-b77a-68c456cc557e-trusted-ca-bundle\") pod \"console-679b7bfb6c-6ncth\" (UID: \"6619b1c1-dd68-4fa1-b77a-68c456cc557e\") " pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.120660 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6619b1c1-dd68-4fa1-b77a-68c456cc557e-oauth-serving-cert\") pod \"console-679b7bfb6c-6ncth\" (UID: \"6619b1c1-dd68-4fa1-b77a-68c456cc557e\") " pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.120692 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6619b1c1-dd68-4fa1-b77a-68c456cc557e-console-config\") pod \"console-679b7bfb6c-6ncth\" (UID: \"6619b1c1-dd68-4fa1-b77a-68c456cc557e\") " pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.120708 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6619b1c1-dd68-4fa1-b77a-68c456cc557e-service-ca\") pod \"console-679b7bfb6c-6ncth\" (UID: \"6619b1c1-dd68-4fa1-b77a-68c456cc557e\") " pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:01 
crc kubenswrapper[4981]: I0227 19:00:01.120731 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bacb7ae1-b5de-400a-9182-850c94bff2ac-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-4fnww\" (UID: \"bacb7ae1-b5de-400a-9182-850c94bff2ac\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4fnww" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.120785 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87xbh\" (UniqueName: \"kubernetes.io/projected/bacb7ae1-b5de-400a-9182-850c94bff2ac-kube-api-access-87xbh\") pod \"nmstate-console-plugin-5dcbbd79cf-4fnww\" (UID: \"bacb7ae1-b5de-400a-9182-850c94bff2ac\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4fnww" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.121397 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6619b1c1-dd68-4fa1-b77a-68c456cc557e-console-oauth-config\") pod \"console-679b7bfb6c-6ncth\" (UID: \"6619b1c1-dd68-4fa1-b77a-68c456cc557e\") " pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.121422 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqk6g\" (UniqueName: \"kubernetes.io/projected/6619b1c1-dd68-4fa1-b77a-68c456cc557e-kube-api-access-wqk6g\") pod \"console-679b7bfb6c-6ncth\" (UID: \"6619b1c1-dd68-4fa1-b77a-68c456cc557e\") " pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.121832 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bacb7ae1-b5de-400a-9182-850c94bff2ac-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-4fnww\" (UID: 
\"bacb7ae1-b5de-400a-9182-850c94bff2ac\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4fnww" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.128329 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bacb7ae1-b5de-400a-9182-850c94bff2ac-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-4fnww\" (UID: \"bacb7ae1-b5de-400a-9182-850c94bff2ac\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4fnww" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.145244 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87xbh\" (UniqueName: \"kubernetes.io/projected/bacb7ae1-b5de-400a-9182-850c94bff2ac-kube-api-access-87xbh\") pod \"nmstate-console-plugin-5dcbbd79cf-4fnww\" (UID: \"bacb7ae1-b5de-400a-9182-850c94bff2ac\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4fnww" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.146517 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4fnww" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.222207 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6619b1c1-dd68-4fa1-b77a-68c456cc557e-console-oauth-config\") pod \"console-679b7bfb6c-6ncth\" (UID: \"6619b1c1-dd68-4fa1-b77a-68c456cc557e\") " pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.222265 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqk6g\" (UniqueName: \"kubernetes.io/projected/6619b1c1-dd68-4fa1-b77a-68c456cc557e-kube-api-access-wqk6g\") pod \"console-679b7bfb6c-6ncth\" (UID: \"6619b1c1-dd68-4fa1-b77a-68c456cc557e\") " pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.222284 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6619b1c1-dd68-4fa1-b77a-68c456cc557e-console-serving-cert\") pod \"console-679b7bfb6c-6ncth\" (UID: \"6619b1c1-dd68-4fa1-b77a-68c456cc557e\") " pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.222337 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6619b1c1-dd68-4fa1-b77a-68c456cc557e-trusted-ca-bundle\") pod \"console-679b7bfb6c-6ncth\" (UID: \"6619b1c1-dd68-4fa1-b77a-68c456cc557e\") " pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.222364 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6619b1c1-dd68-4fa1-b77a-68c456cc557e-oauth-serving-cert\") pod \"console-679b7bfb6c-6ncth\" (UID: 
\"6619b1c1-dd68-4fa1-b77a-68c456cc557e\") " pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.222396 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6619b1c1-dd68-4fa1-b77a-68c456cc557e-console-config\") pod \"console-679b7bfb6c-6ncth\" (UID: \"6619b1c1-dd68-4fa1-b77a-68c456cc557e\") " pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.222415 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6619b1c1-dd68-4fa1-b77a-68c456cc557e-service-ca\") pod \"console-679b7bfb6c-6ncth\" (UID: \"6619b1c1-dd68-4fa1-b77a-68c456cc557e\") " pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.223500 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6619b1c1-dd68-4fa1-b77a-68c456cc557e-console-config\") pod \"console-679b7bfb6c-6ncth\" (UID: \"6619b1c1-dd68-4fa1-b77a-68c456cc557e\") " pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.223714 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6619b1c1-dd68-4fa1-b77a-68c456cc557e-oauth-serving-cert\") pod \"console-679b7bfb6c-6ncth\" (UID: \"6619b1c1-dd68-4fa1-b77a-68c456cc557e\") " pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.224167 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6619b1c1-dd68-4fa1-b77a-68c456cc557e-trusted-ca-bundle\") pod \"console-679b7bfb6c-6ncth\" (UID: \"6619b1c1-dd68-4fa1-b77a-68c456cc557e\") " 
pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.226505 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6619b1c1-dd68-4fa1-b77a-68c456cc557e-service-ca\") pod \"console-679b7bfb6c-6ncth\" (UID: \"6619b1c1-dd68-4fa1-b77a-68c456cc557e\") " pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.228864 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6619b1c1-dd68-4fa1-b77a-68c456cc557e-console-serving-cert\") pod \"console-679b7bfb6c-6ncth\" (UID: \"6619b1c1-dd68-4fa1-b77a-68c456cc557e\") " pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.229278 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6619b1c1-dd68-4fa1-b77a-68c456cc557e-console-oauth-config\") pod \"console-679b7bfb6c-6ncth\" (UID: \"6619b1c1-dd68-4fa1-b77a-68c456cc557e\") " pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.239635 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqk6g\" (UniqueName: \"kubernetes.io/projected/6619b1c1-dd68-4fa1-b77a-68c456cc557e-kube-api-access-wqk6g\") pod \"console-679b7bfb6c-6ncth\" (UID: \"6619b1c1-dd68-4fa1-b77a-68c456cc557e\") " pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:01 crc kubenswrapper[4981]: W0227 19:00:01.274460 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59ad1269_77a3_4b47_833b_60da24bbe283.slice/crio-66f2af711881460181bf91dd97cdf8f9bf5495b2e6abb2378b9426697d14a72b WatchSource:0}: Error finding container 
66f2af711881460181bf91dd97cdf8f9bf5495b2e6abb2378b9426697d14a72b: Status 404 returned error can't find the container with id 66f2af711881460181bf91dd97cdf8f9bf5495b2e6abb2378b9426697d14a72b Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.276528 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-s77xl"] Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.283005 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-hqtml"] Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.311819 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.484394 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-679b7bfb6c-6ncth"] Feb 27 19:00:01 crc kubenswrapper[4981]: W0227 19:00:01.504783 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6619b1c1_dd68_4fa1_b77a_68c456cc557e.slice/crio-4b1ae1a1bef1c1ee6b66fea380a37a62937767e9cd2069790cf3c4ca352b30b1 WatchSource:0}: Error finding container 4b1ae1a1bef1c1ee6b66fea380a37a62937767e9cd2069790cf3c4ca352b30b1: Status 404 returned error can't find the container with id 4b1ae1a1bef1c1ee6b66fea380a37a62937767e9cd2069790cf3c4ca352b30b1 Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.544522 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4fnww"] Feb 27 19:00:01 crc kubenswrapper[4981]: W0227 19:00:01.554087 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbacb7ae1_b5de_400a_9182_850c94bff2ac.slice/crio-1c02fc32ece1691c95f9525be03079b378e0304688a9914121bf7781bd304b4a WatchSource:0}: Error finding container 
1c02fc32ece1691c95f9525be03079b378e0304688a9914121bf7781bd304b4a: Status 404 returned error can't find the container with id 1c02fc32ece1691c95f9525be03079b378e0304688a9914121bf7781bd304b4a Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.662216 4981 generic.go:334] "Generic (PLEG): container finished" podID="aa7d7b4b-cc16-4da4-94d1-4daae958bacf" containerID="f493742b48afcf508ee719d3fbf4500365b35a50a93d137536fdaca1cd73f69c" exitCode=0 Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.662300 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-bf5w8" event={"ID":"aa7d7b4b-cc16-4da4-94d1-4daae958bacf","Type":"ContainerDied","Data":"f493742b48afcf508ee719d3fbf4500365b35a50a93d137536fdaca1cd73f69c"} Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.662326 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-bf5w8" event={"ID":"aa7d7b4b-cc16-4da4-94d1-4daae958bacf","Type":"ContainerStarted","Data":"06df6b2153a5c0638b3dff929cd2c8e21ec90ac0290ef402d85bd71b55262a1f"} Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.667841 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-s77xl" event={"ID":"59ad1269-77a3-4b47-833b-60da24bbe283","Type":"ContainerStarted","Data":"66f2af711881460181bf91dd97cdf8f9bf5495b2e6abb2378b9426697d14a72b"} Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.669669 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-hqtml" event={"ID":"fecb7b86-12e2-42ad-af13-8d5b3cbcae05","Type":"ContainerStarted","Data":"c3d24b0989cc56bd514794bbea7d2be21f2e3b4b0a07aebd2772b6e9f5c8c887"} Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.674108 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536980-lq4dg" 
event={"ID":"46100661-2b00-441b-a3e6-394279e60051","Type":"ContainerStarted","Data":"d1458402da47bb51f935e805d5b7b556bc829df79b60936862c6d51af42e874a"} Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.676135 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4fnww" event={"ID":"bacb7ae1-b5de-400a-9182-850c94bff2ac","Type":"ContainerStarted","Data":"1c02fc32ece1691c95f9525be03079b378e0304688a9914121bf7781bd304b4a"} Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.681756 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-679b7bfb6c-6ncth" event={"ID":"6619b1c1-dd68-4fa1-b77a-68c456cc557e","Type":"ContainerStarted","Data":"4b1ae1a1bef1c1ee6b66fea380a37a62937767e9cd2069790cf3c4ca352b30b1"} Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.688043 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5g6ww" event={"ID":"252e90b6-bbc9-40ed-a7f6-df2cd8bec420","Type":"ContainerStarted","Data":"3713f198971d0821537aa3f8e9f07c7904cde2dc35df82b2cd7bb67b12968d0f"} Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.688266 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jn27h" podUID="133ca76d-9978-4cca-825d-341faab093cd" containerName="registry-server" containerID="cri-o://5583d0ac4946c78bc2adf2e2ba66c3174d0694a348cc32d1817375c6339546e1" gracePeriod=2 Feb 27 19:00:01 crc kubenswrapper[4981]: I0227 19:00:01.785190 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-679b7bfb6c-6ncth" podStartSLOduration=1.785168318 podStartE2EDuration="1.785168318s" podCreationTimestamp="2026-02-27 19:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:00:01.783429564 +0000 UTC m=+901.262210734" 
watchObservedRunningTime="2026-02-27 19:00:01.785168318 +0000 UTC m=+901.263949478" Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.075943 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jn27h" Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.239327 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133ca76d-9978-4cca-825d-341faab093cd-catalog-content\") pod \"133ca76d-9978-4cca-825d-341faab093cd\" (UID: \"133ca76d-9978-4cca-825d-341faab093cd\") " Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.239533 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hbnj\" (UniqueName: \"kubernetes.io/projected/133ca76d-9978-4cca-825d-341faab093cd-kube-api-access-4hbnj\") pod \"133ca76d-9978-4cca-825d-341faab093cd\" (UID: \"133ca76d-9978-4cca-825d-341faab093cd\") " Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.239778 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133ca76d-9978-4cca-825d-341faab093cd-utilities\") pod \"133ca76d-9978-4cca-825d-341faab093cd\" (UID: \"133ca76d-9978-4cca-825d-341faab093cd\") " Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.242334 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/133ca76d-9978-4cca-825d-341faab093cd-utilities" (OuterVolumeSpecName: "utilities") pod "133ca76d-9978-4cca-825d-341faab093cd" (UID: "133ca76d-9978-4cca-825d-341faab093cd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.250398 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/133ca76d-9978-4cca-825d-341faab093cd-kube-api-access-4hbnj" (OuterVolumeSpecName: "kube-api-access-4hbnj") pod "133ca76d-9978-4cca-825d-341faab093cd" (UID: "133ca76d-9978-4cca-825d-341faab093cd"). InnerVolumeSpecName "kube-api-access-4hbnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.342232 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/133ca76d-9978-4cca-825d-341faab093cd-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.342652 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hbnj\" (UniqueName: \"kubernetes.io/projected/133ca76d-9978-4cca-825d-341faab093cd-kube-api-access-4hbnj\") on node \"crc\" DevicePath \"\"" Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.441861 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/133ca76d-9978-4cca-825d-341faab093cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "133ca76d-9978-4cca-825d-341faab093cd" (UID: "133ca76d-9978-4cca-825d-341faab093cd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.444260 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/133ca76d-9978-4cca-825d-341faab093cd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.701397 4981 generic.go:334] "Generic (PLEG): container finished" podID="133ca76d-9978-4cca-825d-341faab093cd" containerID="5583d0ac4946c78bc2adf2e2ba66c3174d0694a348cc32d1817375c6339546e1" exitCode=0 Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.701491 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jn27h" event={"ID":"133ca76d-9978-4cca-825d-341faab093cd","Type":"ContainerDied","Data":"5583d0ac4946c78bc2adf2e2ba66c3174d0694a348cc32d1817375c6339546e1"} Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.701524 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jn27h" event={"ID":"133ca76d-9978-4cca-825d-341faab093cd","Type":"ContainerDied","Data":"462d00ab05f962478580caf26239f41b0660c31dfe63488f39ed91ff2b47ac56"} Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.701489 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jn27h" Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.701545 4981 scope.go:117] "RemoveContainer" containerID="5583d0ac4946c78bc2adf2e2ba66c3174d0694a348cc32d1817375c6339546e1" Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.703031 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536980-lq4dg" event={"ID":"46100661-2b00-441b-a3e6-394279e60051","Type":"ContainerStarted","Data":"1ec792a976db7e4b656154acd87e34d37f61a528243e6e09fe73ec1e9140c159"} Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.707917 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-679b7bfb6c-6ncth" event={"ID":"6619b1c1-dd68-4fa1-b77a-68c456cc557e","Type":"ContainerStarted","Data":"5e7128975ccc9adb4e8320ae4dd9dc5ce328363eca1baa9f25eb24eab28f3289"} Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.718364 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qj99b" event={"ID":"d889c4d8-3c9b-41b4-be84-4026b0967d12","Type":"ContainerStarted","Data":"ba769fb9d9f3efae3bdc4a723c03185e57bbf469563848b10e6110fbec5e96c9"} Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.721425 4981 scope.go:117] "RemoveContainer" containerID="e3cea1e907ad2bb4a81ae5ef0f9aff8365362ff2a01bd38f79f26850d1a61e8b" Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.734706 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536980-lq4dg" podStartSLOduration=1.220681885 podStartE2EDuration="2.734673755s" podCreationTimestamp="2026-02-27 19:00:00 +0000 UTC" firstStartedPulling="2026-02-27 19:00:00.777317093 +0000 UTC m=+900.256098253" lastFinishedPulling="2026-02-27 19:00:02.291308923 +0000 UTC m=+901.770090123" observedRunningTime="2026-02-27 19:00:02.727600469 +0000 UTC m=+902.206381639" watchObservedRunningTime="2026-02-27 
19:00:02.734673755 +0000 UTC m=+902.213454925" Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.757164 4981 scope.go:117] "RemoveContainer" containerID="2ce7c10e40b757bd57144915837ab580e109dd8e37399d847908281a514baba9" Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.757266 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qj99b" podStartSLOduration=8.916172416 podStartE2EDuration="13.757254622s" podCreationTimestamp="2026-02-27 18:59:49 +0000 UTC" firstStartedPulling="2026-02-27 18:59:56.581363077 +0000 UTC m=+896.060144277" lastFinishedPulling="2026-02-27 19:00:01.422445323 +0000 UTC m=+900.901226483" observedRunningTime="2026-02-27 19:00:02.755601072 +0000 UTC m=+902.234382232" watchObservedRunningTime="2026-02-27 19:00:02.757254622 +0000 UTC m=+902.236035782" Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.781150 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jn27h"] Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.784410 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jn27h"] Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.785798 4981 scope.go:117] "RemoveContainer" containerID="5583d0ac4946c78bc2adf2e2ba66c3174d0694a348cc32d1817375c6339546e1" Feb 27 19:00:02 crc kubenswrapper[4981]: E0227 19:00:02.786357 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5583d0ac4946c78bc2adf2e2ba66c3174d0694a348cc32d1817375c6339546e1\": container with ID starting with 5583d0ac4946c78bc2adf2e2ba66c3174d0694a348cc32d1817375c6339546e1 not found: ID does not exist" containerID="5583d0ac4946c78bc2adf2e2ba66c3174d0694a348cc32d1817375c6339546e1" Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.786417 4981 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5583d0ac4946c78bc2adf2e2ba66c3174d0694a348cc32d1817375c6339546e1"} err="failed to get container status \"5583d0ac4946c78bc2adf2e2ba66c3174d0694a348cc32d1817375c6339546e1\": rpc error: code = NotFound desc = could not find container \"5583d0ac4946c78bc2adf2e2ba66c3174d0694a348cc32d1817375c6339546e1\": container with ID starting with 5583d0ac4946c78bc2adf2e2ba66c3174d0694a348cc32d1817375c6339546e1 not found: ID does not exist" Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.786457 4981 scope.go:117] "RemoveContainer" containerID="e3cea1e907ad2bb4a81ae5ef0f9aff8365362ff2a01bd38f79f26850d1a61e8b" Feb 27 19:00:02 crc kubenswrapper[4981]: E0227 19:00:02.788138 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3cea1e907ad2bb4a81ae5ef0f9aff8365362ff2a01bd38f79f26850d1a61e8b\": container with ID starting with e3cea1e907ad2bb4a81ae5ef0f9aff8365362ff2a01bd38f79f26850d1a61e8b not found: ID does not exist" containerID="e3cea1e907ad2bb4a81ae5ef0f9aff8365362ff2a01bd38f79f26850d1a61e8b" Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.788182 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3cea1e907ad2bb4a81ae5ef0f9aff8365362ff2a01bd38f79f26850d1a61e8b"} err="failed to get container status \"e3cea1e907ad2bb4a81ae5ef0f9aff8365362ff2a01bd38f79f26850d1a61e8b\": rpc error: code = NotFound desc = could not find container \"e3cea1e907ad2bb4a81ae5ef0f9aff8365362ff2a01bd38f79f26850d1a61e8b\": container with ID starting with e3cea1e907ad2bb4a81ae5ef0f9aff8365362ff2a01bd38f79f26850d1a61e8b not found: ID does not exist" Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.788230 4981 scope.go:117] "RemoveContainer" containerID="2ce7c10e40b757bd57144915837ab580e109dd8e37399d847908281a514baba9" Feb 27 19:00:02 crc kubenswrapper[4981]: E0227 19:00:02.792577 4981 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2ce7c10e40b757bd57144915837ab580e109dd8e37399d847908281a514baba9\": container with ID starting with 2ce7c10e40b757bd57144915837ab580e109dd8e37399d847908281a514baba9 not found: ID does not exist" containerID="2ce7c10e40b757bd57144915837ab580e109dd8e37399d847908281a514baba9" Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.792629 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ce7c10e40b757bd57144915837ab580e109dd8e37399d847908281a514baba9"} err="failed to get container status \"2ce7c10e40b757bd57144915837ab580e109dd8e37399d847908281a514baba9\": rpc error: code = NotFound desc = could not find container \"2ce7c10e40b757bd57144915837ab580e109dd8e37399d847908281a514baba9\": container with ID starting with 2ce7c10e40b757bd57144915837ab580e109dd8e37399d847908281a514baba9 not found: ID does not exist" Feb 27 19:00:02 crc kubenswrapper[4981]: I0227 19:00:02.982460 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-bf5w8" Feb 27 19:00:03 crc kubenswrapper[4981]: I0227 19:00:03.152451 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa7d7b4b-cc16-4da4-94d1-4daae958bacf-config-volume\") pod \"aa7d7b4b-cc16-4da4-94d1-4daae958bacf\" (UID: \"aa7d7b4b-cc16-4da4-94d1-4daae958bacf\") " Feb 27 19:00:03 crc kubenswrapper[4981]: I0227 19:00:03.152628 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa7d7b4b-cc16-4da4-94d1-4daae958bacf-secret-volume\") pod \"aa7d7b4b-cc16-4da4-94d1-4daae958bacf\" (UID: \"aa7d7b4b-cc16-4da4-94d1-4daae958bacf\") " Feb 27 19:00:03 crc kubenswrapper[4981]: I0227 19:00:03.152674 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgjbq\" (UniqueName: \"kubernetes.io/projected/aa7d7b4b-cc16-4da4-94d1-4daae958bacf-kube-api-access-qgjbq\") pod \"aa7d7b4b-cc16-4da4-94d1-4daae958bacf\" (UID: \"aa7d7b4b-cc16-4da4-94d1-4daae958bacf\") " Feb 27 19:00:03 crc kubenswrapper[4981]: I0227 19:00:03.153591 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa7d7b4b-cc16-4da4-94d1-4daae958bacf-config-volume" (OuterVolumeSpecName: "config-volume") pod "aa7d7b4b-cc16-4da4-94d1-4daae958bacf" (UID: "aa7d7b4b-cc16-4da4-94d1-4daae958bacf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:00:03 crc kubenswrapper[4981]: I0227 19:00:03.158107 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa7d7b4b-cc16-4da4-94d1-4daae958bacf-kube-api-access-qgjbq" (OuterVolumeSpecName: "kube-api-access-qgjbq") pod "aa7d7b4b-cc16-4da4-94d1-4daae958bacf" (UID: "aa7d7b4b-cc16-4da4-94d1-4daae958bacf"). 
InnerVolumeSpecName "kube-api-access-qgjbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:00:03 crc kubenswrapper[4981]: I0227 19:00:03.158327 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa7d7b4b-cc16-4da4-94d1-4daae958bacf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "aa7d7b4b-cc16-4da4-94d1-4daae958bacf" (UID: "aa7d7b4b-cc16-4da4-94d1-4daae958bacf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:00:03 crc kubenswrapper[4981]: I0227 19:00:03.253863 4981 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa7d7b4b-cc16-4da4-94d1-4daae958bacf-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 19:00:03 crc kubenswrapper[4981]: I0227 19:00:03.253921 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgjbq\" (UniqueName: \"kubernetes.io/projected/aa7d7b4b-cc16-4da4-94d1-4daae958bacf-kube-api-access-qgjbq\") on node \"crc\" DevicePath \"\"" Feb 27 19:00:03 crc kubenswrapper[4981]: I0227 19:00:03.253941 4981 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa7d7b4b-cc16-4da4-94d1-4daae958bacf-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 19:00:03 crc kubenswrapper[4981]: I0227 19:00:03.643782 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="133ca76d-9978-4cca-825d-341faab093cd" path="/var/lib/kubelet/pods/133ca76d-9978-4cca-825d-341faab093cd/volumes" Feb 27 19:00:03 crc kubenswrapper[4981]: I0227 19:00:03.729908 4981 generic.go:334] "Generic (PLEG): container finished" podID="46100661-2b00-441b-a3e6-394279e60051" containerID="1ec792a976db7e4b656154acd87e34d37f61a528243e6e09fe73ec1e9140c159" exitCode=0 Feb 27 19:00:03 crc kubenswrapper[4981]: I0227 19:00:03.729969 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29536980-lq4dg" event={"ID":"46100661-2b00-441b-a3e6-394279e60051","Type":"ContainerDied","Data":"1ec792a976db7e4b656154acd87e34d37f61a528243e6e09fe73ec1e9140c159"} Feb 27 19:00:03 crc kubenswrapper[4981]: I0227 19:00:03.736758 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-bf5w8" Feb 27 19:00:03 crc kubenswrapper[4981]: I0227 19:00:03.737615 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-bf5w8" event={"ID":"aa7d7b4b-cc16-4da4-94d1-4daae958bacf","Type":"ContainerDied","Data":"06df6b2153a5c0638b3dff929cd2c8e21ec90ac0290ef402d85bd71b55262a1f"} Feb 27 19:00:03 crc kubenswrapper[4981]: I0227 19:00:03.737996 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06df6b2153a5c0638b3dff929cd2c8e21ec90ac0290ef402d85bd71b55262a1f" Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.024245 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536980-lq4dg" Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.095910 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p62h6"] Feb 27 19:00:05 crc kubenswrapper[4981]: E0227 19:00:05.096128 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="133ca76d-9978-4cca-825d-341faab093cd" containerName="registry-server" Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.096142 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="133ca76d-9978-4cca-825d-341faab093cd" containerName="registry-server" Feb 27 19:00:05 crc kubenswrapper[4981]: E0227 19:00:05.096154 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa7d7b4b-cc16-4da4-94d1-4daae958bacf" containerName="collect-profiles" Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.096160 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7d7b4b-cc16-4da4-94d1-4daae958bacf" containerName="collect-profiles" Feb 27 19:00:05 crc kubenswrapper[4981]: E0227 19:00:05.096168 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="133ca76d-9978-4cca-825d-341faab093cd" containerName="extract-content" Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.096174 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="133ca76d-9978-4cca-825d-341faab093cd" containerName="extract-content" Feb 27 19:00:05 crc kubenswrapper[4981]: E0227 19:00:05.096180 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="133ca76d-9978-4cca-825d-341faab093cd" containerName="extract-utilities" Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.096186 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="133ca76d-9978-4cca-825d-341faab093cd" containerName="extract-utilities" Feb 27 19:00:05 crc kubenswrapper[4981]: E0227 19:00:05.096194 4981 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="46100661-2b00-441b-a3e6-394279e60051" containerName="oc" Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.096200 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="46100661-2b00-441b-a3e6-394279e60051" containerName="oc" Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.096294 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa7d7b4b-cc16-4da4-94d1-4daae958bacf" containerName="collect-profiles" Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.096311 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="46100661-2b00-441b-a3e6-394279e60051" containerName="oc" Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.096318 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="133ca76d-9978-4cca-825d-341faab093cd" containerName="registry-server" Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.097073 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p62h6" Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.100126 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p62h6"] Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.192657 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wwrr\" (UniqueName: \"kubernetes.io/projected/46100661-2b00-441b-a3e6-394279e60051-kube-api-access-8wwrr\") pod \"46100661-2b00-441b-a3e6-394279e60051\" (UID: \"46100661-2b00-441b-a3e6-394279e60051\") " Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.202788 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46100661-2b00-441b-a3e6-394279e60051-kube-api-access-8wwrr" (OuterVolumeSpecName: "kube-api-access-8wwrr") pod "46100661-2b00-441b-a3e6-394279e60051" (UID: "46100661-2b00-441b-a3e6-394279e60051"). InnerVolumeSpecName "kube-api-access-8wwrr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.293792 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/372fd37b-3c2e-495a-944f-4832079304c6-utilities\") pod \"redhat-marketplace-p62h6\" (UID: \"372fd37b-3c2e-495a-944f-4832079304c6\") " pod="openshift-marketplace/redhat-marketplace-p62h6" Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.293879 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/372fd37b-3c2e-495a-944f-4832079304c6-catalog-content\") pod \"redhat-marketplace-p62h6\" (UID: \"372fd37b-3c2e-495a-944f-4832079304c6\") " pod="openshift-marketplace/redhat-marketplace-p62h6" Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.293922 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mslq\" (UniqueName: \"kubernetes.io/projected/372fd37b-3c2e-495a-944f-4832079304c6-kube-api-access-2mslq\") pod \"redhat-marketplace-p62h6\" (UID: \"372fd37b-3c2e-495a-944f-4832079304c6\") " pod="openshift-marketplace/redhat-marketplace-p62h6" Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.293970 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wwrr\" (UniqueName: \"kubernetes.io/projected/46100661-2b00-441b-a3e6-394279e60051-kube-api-access-8wwrr\") on node \"crc\" DevicePath \"\"" Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.395255 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/372fd37b-3c2e-495a-944f-4832079304c6-catalog-content\") pod \"redhat-marketplace-p62h6\" (UID: \"372fd37b-3c2e-495a-944f-4832079304c6\") " pod="openshift-marketplace/redhat-marketplace-p62h6" Feb 27 19:00:05 crc 
kubenswrapper[4981]: I0227 19:00:05.395327 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mslq\" (UniqueName: \"kubernetes.io/projected/372fd37b-3c2e-495a-944f-4832079304c6-kube-api-access-2mslq\") pod \"redhat-marketplace-p62h6\" (UID: \"372fd37b-3c2e-495a-944f-4832079304c6\") " pod="openshift-marketplace/redhat-marketplace-p62h6" Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.395394 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/372fd37b-3c2e-495a-944f-4832079304c6-utilities\") pod \"redhat-marketplace-p62h6\" (UID: \"372fd37b-3c2e-495a-944f-4832079304c6\") " pod="openshift-marketplace/redhat-marketplace-p62h6" Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.396404 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/372fd37b-3c2e-495a-944f-4832079304c6-catalog-content\") pod \"redhat-marketplace-p62h6\" (UID: \"372fd37b-3c2e-495a-944f-4832079304c6\") " pod="openshift-marketplace/redhat-marketplace-p62h6" Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.397333 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/372fd37b-3c2e-495a-944f-4832079304c6-utilities\") pod \"redhat-marketplace-p62h6\" (UID: \"372fd37b-3c2e-495a-944f-4832079304c6\") " pod="openshift-marketplace/redhat-marketplace-p62h6" Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.414100 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mslq\" (UniqueName: \"kubernetes.io/projected/372fd37b-3c2e-495a-944f-4832079304c6-kube-api-access-2mslq\") pod \"redhat-marketplace-p62h6\" (UID: \"372fd37b-3c2e-495a-944f-4832079304c6\") " pod="openshift-marketplace/redhat-marketplace-p62h6" Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.429565 4981 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p62h6" Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.639776 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p62h6"] Feb 27 19:00:05 crc kubenswrapper[4981]: W0227 19:00:05.650723 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod372fd37b_3c2e_495a_944f_4832079304c6.slice/crio-bb05551925efa515437eae10dcaf45105361fac0e3f733c74eedf5a707407a4b WatchSource:0}: Error finding container bb05551925efa515437eae10dcaf45105361fac0e3f733c74eedf5a707407a4b: Status 404 returned error can't find the container with id bb05551925efa515437eae10dcaf45105361fac0e3f733c74eedf5a707407a4b Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.766223 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536980-lq4dg" Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.766240 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536980-lq4dg" event={"ID":"46100661-2b00-441b-a3e6-394279e60051","Type":"ContainerDied","Data":"d1458402da47bb51f935e805d5b7b556bc829df79b60936862c6d51af42e874a"} Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.766273 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1458402da47bb51f935e805d5b7b556bc829df79b60936862c6d51af42e874a" Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.767237 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p62h6" event={"ID":"372fd37b-3c2e-495a-944f-4832079304c6","Type":"ContainerStarted","Data":"bb05551925efa515437eae10dcaf45105361fac0e3f733c74eedf5a707407a4b"} Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.781616 4981 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536974-gdsvz"] Feb 27 19:00:05 crc kubenswrapper[4981]: I0227 19:00:05.784735 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536974-gdsvz"] Feb 27 19:00:06 crc kubenswrapper[4981]: I0227 19:00:06.783614 4981 generic.go:334] "Generic (PLEG): container finished" podID="372fd37b-3c2e-495a-944f-4832079304c6" containerID="0cb881ceffe84467ab575d257c5e37ae0fe2a7e744748d5d3bb7cada376aa7e7" exitCode=0 Feb 27 19:00:06 crc kubenswrapper[4981]: I0227 19:00:06.783780 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p62h6" event={"ID":"372fd37b-3c2e-495a-944f-4832079304c6","Type":"ContainerDied","Data":"0cb881ceffe84467ab575d257c5e37ae0fe2a7e744748d5d3bb7cada376aa7e7"} Feb 27 19:00:07 crc kubenswrapper[4981]: I0227 19:00:07.637829 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1172e92a-70af-4085-8fae-bc7fb4b3dba6" path="/var/lib/kubelet/pods/1172e92a-70af-4085-8fae-bc7fb4b3dba6/volumes" Feb 27 19:00:07 crc kubenswrapper[4981]: I0227 19:00:07.794355 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4fnww" event={"ID":"bacb7ae1-b5de-400a-9182-850c94bff2ac","Type":"ContainerStarted","Data":"9fbdb6fb152c1505b1398b85c5748646f6e344b7c088c92fa5b20ea696b370c1"} Feb 27 19:00:07 crc kubenswrapper[4981]: I0227 19:00:07.796430 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5g6ww" event={"ID":"252e90b6-bbc9-40ed-a7f6-df2cd8bec420","Type":"ContainerStarted","Data":"d9d3aeb649bd822f2f4670e0c8c2bbe3eaebe02b48cca74ee194ae893cf76fd8"} Feb 27 19:00:07 crc kubenswrapper[4981]: I0227 19:00:07.796939 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-5g6ww" Feb 27 19:00:07 crc kubenswrapper[4981]: I0227 19:00:07.798834 4981 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-s77xl" event={"ID":"59ad1269-77a3-4b47-833b-60da24bbe283","Type":"ContainerStarted","Data":"d5a69c5688ce0d30d475599bc35af83528d47b146230309acb88e1d5a490aa82"} Feb 27 19:00:07 crc kubenswrapper[4981]: I0227 19:00:07.815504 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-4fnww" podStartSLOduration=2.437953351 podStartE2EDuration="7.815491255s" podCreationTimestamp="2026-02-27 19:00:00 +0000 UTC" firstStartedPulling="2026-02-27 19:00:01.556701934 +0000 UTC m=+901.035483104" lastFinishedPulling="2026-02-27 19:00:06.934239808 +0000 UTC m=+906.413021008" observedRunningTime="2026-02-27 19:00:07.814170925 +0000 UTC m=+907.292952105" watchObservedRunningTime="2026-02-27 19:00:07.815491255 +0000 UTC m=+907.294272415" Feb 27 19:00:07 crc kubenswrapper[4981]: I0227 19:00:07.845438 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-5g6ww" podStartSLOduration=2.028553784 podStartE2EDuration="7.845417097s" podCreationTimestamp="2026-02-27 19:00:00 +0000 UTC" firstStartedPulling="2026-02-27 19:00:01.115939382 +0000 UTC m=+900.594720542" lastFinishedPulling="2026-02-27 19:00:06.932802655 +0000 UTC m=+906.411583855" observedRunningTime="2026-02-27 19:00:07.841806847 +0000 UTC m=+907.320588037" watchObservedRunningTime="2026-02-27 19:00:07.845417097 +0000 UTC m=+907.324198257" Feb 27 19:00:08 crc kubenswrapper[4981]: I0227 19:00:08.810249 4981 generic.go:334] "Generic (PLEG): container finished" podID="372fd37b-3c2e-495a-944f-4832079304c6" containerID="6aed585cf07464f6140058e0a53596921ada57435f063416826c08202e2c3b72" exitCode=0 Feb 27 19:00:08 crc kubenswrapper[4981]: I0227 19:00:08.810332 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p62h6" 
event={"ID":"372fd37b-3c2e-495a-944f-4832079304c6","Type":"ContainerDied","Data":"6aed585cf07464f6140058e0a53596921ada57435f063416826c08202e2c3b72"} Feb 27 19:00:09 crc kubenswrapper[4981]: I0227 19:00:09.817976 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p62h6" event={"ID":"372fd37b-3c2e-495a-944f-4832079304c6","Type":"ContainerStarted","Data":"83ed8dc29d0520f8e3300f91bff4877c50cd569cc629a60393b9a8c170d70621"} Feb 27 19:00:09 crc kubenswrapper[4981]: I0227 19:00:09.845701 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p62h6" podStartSLOduration=2.42554764 podStartE2EDuration="4.845684646s" podCreationTimestamp="2026-02-27 19:00:05 +0000 UTC" firstStartedPulling="2026-02-27 19:00:06.842537404 +0000 UTC m=+906.321318564" lastFinishedPulling="2026-02-27 19:00:09.2626744 +0000 UTC m=+908.741455570" observedRunningTime="2026-02-27 19:00:09.842216422 +0000 UTC m=+909.320997602" watchObservedRunningTime="2026-02-27 19:00:09.845684646 +0000 UTC m=+909.324465796" Feb 27 19:00:10 crc kubenswrapper[4981]: I0227 19:00:10.204465 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qj99b" Feb 27 19:00:10 crc kubenswrapper[4981]: I0227 19:00:10.204851 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qj99b" Feb 27 19:00:10 crc kubenswrapper[4981]: I0227 19:00:10.269465 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qj99b" Feb 27 19:00:10 crc kubenswrapper[4981]: I0227 19:00:10.895011 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qj99b" Feb 27 19:00:11 crc kubenswrapper[4981]: I0227 19:00:11.284360 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-qj99b"] Feb 27 19:00:11 crc kubenswrapper[4981]: I0227 19:00:11.312959 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:11 crc kubenswrapper[4981]: I0227 19:00:11.313662 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:11 crc kubenswrapper[4981]: I0227 19:00:11.319795 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:11 crc kubenswrapper[4981]: I0227 19:00:11.834225 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-s77xl" event={"ID":"59ad1269-77a3-4b47-833b-60da24bbe283","Type":"ContainerStarted","Data":"01fedb90e4d75d2a11cf123a9e67b22fe9d5b42f6798a9763948c8fa96d13cdd"} Feb 27 19:00:11 crc kubenswrapper[4981]: I0227 19:00:11.836274 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-hqtml" event={"ID":"fecb7b86-12e2-42ad-af13-8d5b3cbcae05","Type":"ContainerStarted","Data":"235910f7d57e97415465c9c2384ec32dfa65384beb5c6cbcb89bbc00251748c6"} Feb 27 19:00:11 crc kubenswrapper[4981]: I0227 19:00:11.843961 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-679b7bfb6c-6ncth" Feb 27 19:00:11 crc kubenswrapper[4981]: I0227 19:00:11.888398 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-69594cc75-s77xl" podStartSLOduration=2.272203901 podStartE2EDuration="11.88837981s" podCreationTimestamp="2026-02-27 19:00:00 +0000 UTC" firstStartedPulling="2026-02-27 19:00:01.28025492 +0000 UTC m=+900.759036080" lastFinishedPulling="2026-02-27 19:00:10.896430819 +0000 UTC m=+910.375211989" observedRunningTime="2026-02-27 19:00:11.863713878 +0000 UTC m=+911.342495098" 
watchObservedRunningTime="2026-02-27 19:00:11.88837981 +0000 UTC m=+911.367160990" Feb 27 19:00:11 crc kubenswrapper[4981]: I0227 19:00:11.888920 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-786f45cff4-hqtml" podStartSLOduration=1.820519845 podStartE2EDuration="11.888911436s" podCreationTimestamp="2026-02-27 19:00:00 +0000 UTC" firstStartedPulling="2026-02-27 19:00:01.296641459 +0000 UTC m=+900.775422619" lastFinishedPulling="2026-02-27 19:00:11.36503304 +0000 UTC m=+910.843814210" observedRunningTime="2026-02-27 19:00:11.88642186 +0000 UTC m=+911.365203030" watchObservedRunningTime="2026-02-27 19:00:11.888911436 +0000 UTC m=+911.367692606" Feb 27 19:00:11 crc kubenswrapper[4981]: I0227 19:00:11.961325 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-dllzn"] Feb 27 19:00:12 crc kubenswrapper[4981]: I0227 19:00:12.843872 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qj99b" podUID="d889c4d8-3c9b-41b4-be84-4026b0967d12" containerName="registry-server" containerID="cri-o://ba769fb9d9f3efae3bdc4a723c03185e57bbf469563848b10e6110fbec5e96c9" gracePeriod=2 Feb 27 19:00:12 crc kubenswrapper[4981]: I0227 19:00:12.844197 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-hqtml" Feb 27 19:00:13 crc kubenswrapper[4981]: I0227 19:00:13.318076 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qj99b" Feb 27 19:00:13 crc kubenswrapper[4981]: I0227 19:00:13.420732 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d889c4d8-3c9b-41b4-be84-4026b0967d12-catalog-content\") pod \"d889c4d8-3c9b-41b4-be84-4026b0967d12\" (UID: \"d889c4d8-3c9b-41b4-be84-4026b0967d12\") " Feb 27 19:00:13 crc kubenswrapper[4981]: I0227 19:00:13.420777 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d889c4d8-3c9b-41b4-be84-4026b0967d12-utilities\") pod \"d889c4d8-3c9b-41b4-be84-4026b0967d12\" (UID: \"d889c4d8-3c9b-41b4-be84-4026b0967d12\") " Feb 27 19:00:13 crc kubenswrapper[4981]: I0227 19:00:13.420806 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxsn9\" (UniqueName: \"kubernetes.io/projected/d889c4d8-3c9b-41b4-be84-4026b0967d12-kube-api-access-vxsn9\") pod \"d889c4d8-3c9b-41b4-be84-4026b0967d12\" (UID: \"d889c4d8-3c9b-41b4-be84-4026b0967d12\") " Feb 27 19:00:13 crc kubenswrapper[4981]: I0227 19:00:13.423088 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d889c4d8-3c9b-41b4-be84-4026b0967d12-utilities" (OuterVolumeSpecName: "utilities") pod "d889c4d8-3c9b-41b4-be84-4026b0967d12" (UID: "d889c4d8-3c9b-41b4-be84-4026b0967d12"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:00:13 crc kubenswrapper[4981]: I0227 19:00:13.427887 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d889c4d8-3c9b-41b4-be84-4026b0967d12-kube-api-access-vxsn9" (OuterVolumeSpecName: "kube-api-access-vxsn9") pod "d889c4d8-3c9b-41b4-be84-4026b0967d12" (UID: "d889c4d8-3c9b-41b4-be84-4026b0967d12"). InnerVolumeSpecName "kube-api-access-vxsn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:00:13 crc kubenswrapper[4981]: I0227 19:00:13.468964 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d889c4d8-3c9b-41b4-be84-4026b0967d12-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d889c4d8-3c9b-41b4-be84-4026b0967d12" (UID: "d889c4d8-3c9b-41b4-be84-4026b0967d12"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:00:13 crc kubenswrapper[4981]: I0227 19:00:13.522910 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d889c4d8-3c9b-41b4-be84-4026b0967d12-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:00:13 crc kubenswrapper[4981]: I0227 19:00:13.522948 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d889c4d8-3c9b-41b4-be84-4026b0967d12-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:00:13 crc kubenswrapper[4981]: I0227 19:00:13.522960 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxsn9\" (UniqueName: \"kubernetes.io/projected/d889c4d8-3c9b-41b4-be84-4026b0967d12-kube-api-access-vxsn9\") on node \"crc\" DevicePath \"\"" Feb 27 19:00:13 crc kubenswrapper[4981]: I0227 19:00:13.853421 4981 generic.go:334] "Generic (PLEG): container finished" podID="d889c4d8-3c9b-41b4-be84-4026b0967d12" containerID="ba769fb9d9f3efae3bdc4a723c03185e57bbf469563848b10e6110fbec5e96c9" exitCode=0 Feb 27 19:00:13 crc kubenswrapper[4981]: I0227 19:00:13.853488 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qj99b" event={"ID":"d889c4d8-3c9b-41b4-be84-4026b0967d12","Type":"ContainerDied","Data":"ba769fb9d9f3efae3bdc4a723c03185e57bbf469563848b10e6110fbec5e96c9"} Feb 27 19:00:13 crc kubenswrapper[4981]: I0227 19:00:13.853554 4981 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-qj99b" event={"ID":"d889c4d8-3c9b-41b4-be84-4026b0967d12","Type":"ContainerDied","Data":"ef2efb8acda9dc7f99e94a0e148178979b7d20d26035569d7cace4c3cc4356ec"} Feb 27 19:00:13 crc kubenswrapper[4981]: I0227 19:00:13.853569 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qj99b" Feb 27 19:00:13 crc kubenswrapper[4981]: I0227 19:00:13.853580 4981 scope.go:117] "RemoveContainer" containerID="ba769fb9d9f3efae3bdc4a723c03185e57bbf469563848b10e6110fbec5e96c9" Feb 27 19:00:13 crc kubenswrapper[4981]: I0227 19:00:13.877276 4981 scope.go:117] "RemoveContainer" containerID="cd7ce321211ff43eadf659f5e81416e69679ad16e560bde720341817a0aa0794" Feb 27 19:00:13 crc kubenswrapper[4981]: I0227 19:00:13.887090 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qj99b"] Feb 27 19:00:13 crc kubenswrapper[4981]: I0227 19:00:13.899180 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qj99b"] Feb 27 19:00:13 crc kubenswrapper[4981]: I0227 19:00:13.902347 4981 scope.go:117] "RemoveContainer" containerID="7740ea2d053fc4d46fd9961be74f3c74af86577590daf36dd437578f4f810224" Feb 27 19:00:13 crc kubenswrapper[4981]: I0227 19:00:13.933803 4981 scope.go:117] "RemoveContainer" containerID="ba769fb9d9f3efae3bdc4a723c03185e57bbf469563848b10e6110fbec5e96c9" Feb 27 19:00:13 crc kubenswrapper[4981]: E0227 19:00:13.934356 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba769fb9d9f3efae3bdc4a723c03185e57bbf469563848b10e6110fbec5e96c9\": container with ID starting with ba769fb9d9f3efae3bdc4a723c03185e57bbf469563848b10e6110fbec5e96c9 not found: ID does not exist" containerID="ba769fb9d9f3efae3bdc4a723c03185e57bbf469563848b10e6110fbec5e96c9" Feb 27 19:00:13 crc kubenswrapper[4981]: I0227 
19:00:13.934444 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba769fb9d9f3efae3bdc4a723c03185e57bbf469563848b10e6110fbec5e96c9"} err="failed to get container status \"ba769fb9d9f3efae3bdc4a723c03185e57bbf469563848b10e6110fbec5e96c9\": rpc error: code = NotFound desc = could not find container \"ba769fb9d9f3efae3bdc4a723c03185e57bbf469563848b10e6110fbec5e96c9\": container with ID starting with ba769fb9d9f3efae3bdc4a723c03185e57bbf469563848b10e6110fbec5e96c9 not found: ID does not exist" Feb 27 19:00:13 crc kubenswrapper[4981]: I0227 19:00:13.934507 4981 scope.go:117] "RemoveContainer" containerID="cd7ce321211ff43eadf659f5e81416e69679ad16e560bde720341817a0aa0794" Feb 27 19:00:13 crc kubenswrapper[4981]: E0227 19:00:13.935028 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd7ce321211ff43eadf659f5e81416e69679ad16e560bde720341817a0aa0794\": container with ID starting with cd7ce321211ff43eadf659f5e81416e69679ad16e560bde720341817a0aa0794 not found: ID does not exist" containerID="cd7ce321211ff43eadf659f5e81416e69679ad16e560bde720341817a0aa0794" Feb 27 19:00:13 crc kubenswrapper[4981]: I0227 19:00:13.935140 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd7ce321211ff43eadf659f5e81416e69679ad16e560bde720341817a0aa0794"} err="failed to get container status \"cd7ce321211ff43eadf659f5e81416e69679ad16e560bde720341817a0aa0794\": rpc error: code = NotFound desc = could not find container \"cd7ce321211ff43eadf659f5e81416e69679ad16e560bde720341817a0aa0794\": container with ID starting with cd7ce321211ff43eadf659f5e81416e69679ad16e560bde720341817a0aa0794 not found: ID does not exist" Feb 27 19:00:13 crc kubenswrapper[4981]: I0227 19:00:13.935176 4981 scope.go:117] "RemoveContainer" containerID="7740ea2d053fc4d46fd9961be74f3c74af86577590daf36dd437578f4f810224" Feb 27 19:00:13 crc 
kubenswrapper[4981]: E0227 19:00:13.935618 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7740ea2d053fc4d46fd9961be74f3c74af86577590daf36dd437578f4f810224\": container with ID starting with 7740ea2d053fc4d46fd9961be74f3c74af86577590daf36dd437578f4f810224 not found: ID does not exist" containerID="7740ea2d053fc4d46fd9961be74f3c74af86577590daf36dd437578f4f810224" Feb 27 19:00:13 crc kubenswrapper[4981]: I0227 19:00:13.935669 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7740ea2d053fc4d46fd9961be74f3c74af86577590daf36dd437578f4f810224"} err="failed to get container status \"7740ea2d053fc4d46fd9961be74f3c74af86577590daf36dd437578f4f810224\": rpc error: code = NotFound desc = could not find container \"7740ea2d053fc4d46fd9961be74f3c74af86577590daf36dd437578f4f810224\": container with ID starting with 7740ea2d053fc4d46fd9961be74f3c74af86577590daf36dd437578f4f810224 not found: ID does not exist" Feb 27 19:00:15 crc kubenswrapper[4981]: I0227 19:00:15.429796 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p62h6" Feb 27 19:00:15 crc kubenswrapper[4981]: I0227 19:00:15.430253 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p62h6" Feb 27 19:00:15 crc kubenswrapper[4981]: I0227 19:00:15.583794 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p62h6" Feb 27 19:00:15 crc kubenswrapper[4981]: I0227 19:00:15.907554 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d889c4d8-3c9b-41b4-be84-4026b0967d12" path="/var/lib/kubelet/pods/d889c4d8-3c9b-41b4-be84-4026b0967d12/volumes" Feb 27 19:00:16 crc kubenswrapper[4981]: I0227 19:00:16.107611 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-nmstate/nmstate-handler-5g6ww"
Feb 27 19:00:16 crc kubenswrapper[4981]: I0227 19:00:16.155481 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p62h6"
Feb 27 19:00:16 crc kubenswrapper[4981]: I0227 19:00:16.688927 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p62h6"]
Feb 27 19:00:18 crc kubenswrapper[4981]: I0227 19:00:18.134547 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p62h6" podUID="372fd37b-3c2e-495a-944f-4832079304c6" containerName="registry-server" containerID="cri-o://83ed8dc29d0520f8e3300f91bff4877c50cd569cc629a60393b9a8c170d70621" gracePeriod=2
Feb 27 19:00:20 crc kubenswrapper[4981]: I0227 19:00:20.152894 4981 generic.go:334] "Generic (PLEG): container finished" podID="372fd37b-3c2e-495a-944f-4832079304c6" containerID="83ed8dc29d0520f8e3300f91bff4877c50cd569cc629a60393b9a8c170d70621" exitCode=0
Feb 27 19:00:20 crc kubenswrapper[4981]: I0227 19:00:20.153394 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p62h6" event={"ID":"372fd37b-3c2e-495a-944f-4832079304c6","Type":"ContainerDied","Data":"83ed8dc29d0520f8e3300f91bff4877c50cd569cc629a60393b9a8c170d70621"}
Feb 27 19:00:20 crc kubenswrapper[4981]: I0227 19:00:20.600843 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p62h6"
Feb 27 19:00:20 crc kubenswrapper[4981]: I0227 19:00:20.681347 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/372fd37b-3c2e-495a-944f-4832079304c6-utilities\") pod \"372fd37b-3c2e-495a-944f-4832079304c6\" (UID: \"372fd37b-3c2e-495a-944f-4832079304c6\") "
Feb 27 19:00:20 crc kubenswrapper[4981]: I0227 19:00:20.681498 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mslq\" (UniqueName: \"kubernetes.io/projected/372fd37b-3c2e-495a-944f-4832079304c6-kube-api-access-2mslq\") pod \"372fd37b-3c2e-495a-944f-4832079304c6\" (UID: \"372fd37b-3c2e-495a-944f-4832079304c6\") "
Feb 27 19:00:20 crc kubenswrapper[4981]: I0227 19:00:20.681551 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/372fd37b-3c2e-495a-944f-4832079304c6-catalog-content\") pod \"372fd37b-3c2e-495a-944f-4832079304c6\" (UID: \"372fd37b-3c2e-495a-944f-4832079304c6\") "
Feb 27 19:00:20 crc kubenswrapper[4981]: I0227 19:00:20.682562 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/372fd37b-3c2e-495a-944f-4832079304c6-utilities" (OuterVolumeSpecName: "utilities") pod "372fd37b-3c2e-495a-944f-4832079304c6" (UID: "372fd37b-3c2e-495a-944f-4832079304c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 19:00:20 crc kubenswrapper[4981]: I0227 19:00:20.689329 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/372fd37b-3c2e-495a-944f-4832079304c6-kube-api-access-2mslq" (OuterVolumeSpecName: "kube-api-access-2mslq") pod "372fd37b-3c2e-495a-944f-4832079304c6" (UID: "372fd37b-3c2e-495a-944f-4832079304c6"). InnerVolumeSpecName "kube-api-access-2mslq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:00:20 crc kubenswrapper[4981]: I0227 19:00:20.708286 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/372fd37b-3c2e-495a-944f-4832079304c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "372fd37b-3c2e-495a-944f-4832079304c6" (UID: "372fd37b-3c2e-495a-944f-4832079304c6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 19:00:20 crc kubenswrapper[4981]: I0227 19:00:20.782891 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mslq\" (UniqueName: \"kubernetes.io/projected/372fd37b-3c2e-495a-944f-4832079304c6-kube-api-access-2mslq\") on node \"crc\" DevicePath \"\""
Feb 27 19:00:20 crc kubenswrapper[4981]: I0227 19:00:20.782945 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/372fd37b-3c2e-495a-944f-4832079304c6-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 19:00:20 crc kubenswrapper[4981]: I0227 19:00:20.782965 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/372fd37b-3c2e-495a-944f-4832079304c6-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 19:00:21 crc kubenswrapper[4981]: I0227 19:00:21.068877 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-hqtml"
Feb 27 19:00:21 crc kubenswrapper[4981]: I0227 19:00:21.187408 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p62h6" event={"ID":"372fd37b-3c2e-495a-944f-4832079304c6","Type":"ContainerDied","Data":"bb05551925efa515437eae10dcaf45105361fac0e3f733c74eedf5a707407a4b"}
Feb 27 19:00:21 crc kubenswrapper[4981]: I0227 19:00:21.187463 4981 scope.go:117] "RemoveContainer" containerID="83ed8dc29d0520f8e3300f91bff4877c50cd569cc629a60393b9a8c170d70621"
Feb 27 19:00:21 crc kubenswrapper[4981]: I0227 19:00:21.187590 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p62h6"
Feb 27 19:00:21 crc kubenswrapper[4981]: I0227 19:00:21.220598 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p62h6"]
Feb 27 19:00:21 crc kubenswrapper[4981]: I0227 19:00:21.220645 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p62h6"]
Feb 27 19:00:21 crc kubenswrapper[4981]: I0227 19:00:21.236939 4981 scope.go:117] "RemoveContainer" containerID="6aed585cf07464f6140058e0a53596921ada57435f063416826c08202e2c3b72"
Feb 27 19:00:21 crc kubenswrapper[4981]: I0227 19:00:21.250119 4981 scope.go:117] "RemoveContainer" containerID="0cb881ceffe84467ab575d257c5e37ae0fe2a7e744748d5d3bb7cada376aa7e7"
Feb 27 19:00:21 crc kubenswrapper[4981]: I0227 19:00:21.634325 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="372fd37b-3c2e-495a-944f-4832079304c6" path="/var/lib/kubelet/pods/372fd37b-3c2e-495a-944f-4832079304c6/volumes"
Feb 27 19:00:37 crc kubenswrapper[4981]: I0227 19:00:37.021339 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-dllzn" podUID="124fe9f8-0789-4ae2-aa50-6eb0c57f60ea" containerName="console" containerID="cri-o://ba060d7604cc473c5f08b17e16ee67f5943431e977b2cefc1dc5b37a5dca2f27" gracePeriod=15
Feb 27 19:00:37 crc kubenswrapper[4981]: I0227 19:00:37.410803 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-dllzn_124fe9f8-0789-4ae2-aa50-6eb0c57f60ea/console/0.log"
Feb 27 19:00:37 crc kubenswrapper[4981]: I0227 19:00:37.411209 4981 generic.go:334] "Generic (PLEG): container finished" podID="124fe9f8-0789-4ae2-aa50-6eb0c57f60ea" containerID="ba060d7604cc473c5f08b17e16ee67f5943431e977b2cefc1dc5b37a5dca2f27" exitCode=2
Feb 27 19:00:37 crc kubenswrapper[4981]: I0227 19:00:37.411254 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dllzn" event={"ID":"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea","Type":"ContainerDied","Data":"ba060d7604cc473c5f08b17e16ee67f5943431e977b2cefc1dc5b37a5dca2f27"}
Feb 27 19:00:37 crc kubenswrapper[4981]: I0227 19:00:37.499987 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-dllzn_124fe9f8-0789-4ae2-aa50-6eb0c57f60ea/console/0.log"
Feb 27 19:00:37 crc kubenswrapper[4981]: I0227 19:00:37.500093 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dllzn"
Feb 27 19:00:37 crc kubenswrapper[4981]: I0227 19:00:37.526181 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-console-oauth-config\") pod \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") "
Feb 27 19:00:37 crc kubenswrapper[4981]: I0227 19:00:37.526237 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-trusted-ca-bundle\") pod \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") "
Feb 27 19:00:37 crc kubenswrapper[4981]: I0227 19:00:37.526288 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-console-serving-cert\") pod \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") "
Feb 27 19:00:37 crc kubenswrapper[4981]: I0227 19:00:37.526312 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-oauth-serving-cert\") pod \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") "
Feb 27 19:00:37 crc kubenswrapper[4981]: I0227 19:00:37.526345 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tckjt\" (UniqueName: \"kubernetes.io/projected/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-kube-api-access-tckjt\") pod \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") "
Feb 27 19:00:37 crc kubenswrapper[4981]: I0227 19:00:37.526365 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-console-config\") pod \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") "
Feb 27 19:00:37 crc kubenswrapper[4981]: I0227 19:00:37.526405 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-service-ca\") pod \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\" (UID: \"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea\") "
Feb 27 19:00:37 crc kubenswrapper[4981]: I0227 19:00:37.527507 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-service-ca" (OuterVolumeSpecName: "service-ca") pod "124fe9f8-0789-4ae2-aa50-6eb0c57f60ea" (UID: "124fe9f8-0789-4ae2-aa50-6eb0c57f60ea"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:00:37 crc kubenswrapper[4981]: I0227 19:00:37.528014 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-console-config" (OuterVolumeSpecName: "console-config") pod "124fe9f8-0789-4ae2-aa50-6eb0c57f60ea" (UID: "124fe9f8-0789-4ae2-aa50-6eb0c57f60ea"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:00:37 crc kubenswrapper[4981]: I0227 19:00:37.528433 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "124fe9f8-0789-4ae2-aa50-6eb0c57f60ea" (UID: "124fe9f8-0789-4ae2-aa50-6eb0c57f60ea"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:00:37 crc kubenswrapper[4981]: I0227 19:00:37.530730 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "124fe9f8-0789-4ae2-aa50-6eb0c57f60ea" (UID: "124fe9f8-0789-4ae2-aa50-6eb0c57f60ea"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:00:37 crc kubenswrapper[4981]: I0227 19:00:37.995813 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "124fe9f8-0789-4ae2-aa50-6eb0c57f60ea" (UID: "124fe9f8-0789-4ae2-aa50-6eb0c57f60ea"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:00:37 crc kubenswrapper[4981]: I0227 19:00:37.997809 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-kube-api-access-tckjt" (OuterVolumeSpecName: "kube-api-access-tckjt") pod "124fe9f8-0789-4ae2-aa50-6eb0c57f60ea" (UID: "124fe9f8-0789-4ae2-aa50-6eb0c57f60ea"). InnerVolumeSpecName "kube-api-access-tckjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:00:38 crc kubenswrapper[4981]: I0227 19:00:38.001813 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "124fe9f8-0789-4ae2-aa50-6eb0c57f60ea" (UID: "124fe9f8-0789-4ae2-aa50-6eb0c57f60ea"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:00:38 crc kubenswrapper[4981]: I0227 19:00:38.004452 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tckjt\" (UniqueName: \"kubernetes.io/projected/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-kube-api-access-tckjt\") on node \"crc\" DevicePath \"\""
Feb 27 19:00:38 crc kubenswrapper[4981]: I0227 19:00:38.004685 4981 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-console-config\") on node \"crc\" DevicePath \"\""
Feb 27 19:00:38 crc kubenswrapper[4981]: I0227 19:00:38.004743 4981 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-service-ca\") on node \"crc\" DevicePath \"\""
Feb 27 19:00:38 crc kubenswrapper[4981]: I0227 19:00:38.004761 4981 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 19:00:38 crc kubenswrapper[4981]: I0227 19:00:38.004771 4981 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 19:00:38 crc kubenswrapper[4981]: I0227 19:00:38.004780 4981 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 19:00:38 crc kubenswrapper[4981]: I0227 19:00:38.106136 4981 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 27 19:00:38 crc kubenswrapper[4981]: I0227 19:00:38.419848 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-dllzn_124fe9f8-0789-4ae2-aa50-6eb0c57f60ea/console/0.log"
Feb 27 19:00:38 crc kubenswrapper[4981]: I0227 19:00:38.420304 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dllzn" event={"ID":"124fe9f8-0789-4ae2-aa50-6eb0c57f60ea","Type":"ContainerDied","Data":"291c0bd07507e20cc41d357f9174a62410e135697c1c09ac8ee311ce37b434aa"}
Feb 27 19:00:38 crc kubenswrapper[4981]: I0227 19:00:38.420353 4981 scope.go:117] "RemoveContainer" containerID="ba060d7604cc473c5f08b17e16ee67f5943431e977b2cefc1dc5b37a5dca2f27"
Feb 27 19:00:38 crc kubenswrapper[4981]: I0227 19:00:38.420502 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dllzn"
Feb 27 19:00:39 crc kubenswrapper[4981]: I0227 19:00:38.445078 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-dllzn"]
Feb 27 19:00:39 crc kubenswrapper[4981]: I0227 19:00:38.451279 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-dllzn"]
Feb 27 19:00:39 crc kubenswrapper[4981]: I0227 19:00:39.635926 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="124fe9f8-0789-4ae2-aa50-6eb0c57f60ea" path="/var/lib/kubelet/pods/124fe9f8-0789-4ae2-aa50-6eb0c57f60ea/volumes"
Feb 27 19:00:40 crc kubenswrapper[4981]: I0227 19:00:40.962836 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4"]
Feb 27 19:00:40 crc kubenswrapper[4981]: E0227 19:00:40.963047 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d889c4d8-3c9b-41b4-be84-4026b0967d12" containerName="registry-server"
Feb 27 19:00:40 crc kubenswrapper[4981]: I0227 19:00:40.963074 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="d889c4d8-3c9b-41b4-be84-4026b0967d12" containerName="registry-server"
Feb 27 19:00:40 crc kubenswrapper[4981]: E0227 19:00:40.963081 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d889c4d8-3c9b-41b4-be84-4026b0967d12" containerName="extract-content"
Feb 27 19:00:40 crc kubenswrapper[4981]: I0227 19:00:40.963087 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="d889c4d8-3c9b-41b4-be84-4026b0967d12" containerName="extract-content"
Feb 27 19:00:40 crc kubenswrapper[4981]: E0227 19:00:40.963100 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372fd37b-3c2e-495a-944f-4832079304c6" containerName="registry-server"
Feb 27 19:00:40 crc kubenswrapper[4981]: I0227 19:00:40.963107 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="372fd37b-3c2e-495a-944f-4832079304c6" containerName="registry-server"
Feb 27 19:00:40 crc kubenswrapper[4981]: E0227 19:00:40.963117 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d889c4d8-3c9b-41b4-be84-4026b0967d12" containerName="extract-utilities"
Feb 27 19:00:40 crc kubenswrapper[4981]: I0227 19:00:40.963123 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="d889c4d8-3c9b-41b4-be84-4026b0967d12" containerName="extract-utilities"
Feb 27 19:00:40 crc kubenswrapper[4981]: E0227 19:00:40.963132 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124fe9f8-0789-4ae2-aa50-6eb0c57f60ea" containerName="console"
Feb 27 19:00:40 crc kubenswrapper[4981]: I0227 19:00:40.963137 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="124fe9f8-0789-4ae2-aa50-6eb0c57f60ea" containerName="console"
Feb 27 19:00:40 crc kubenswrapper[4981]: E0227 19:00:40.963146 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372fd37b-3c2e-495a-944f-4832079304c6" containerName="extract-utilities"
Feb 27 19:00:40 crc kubenswrapper[4981]: I0227 19:00:40.963152 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="372fd37b-3c2e-495a-944f-4832079304c6" containerName="extract-utilities"
Feb 27 19:00:40 crc kubenswrapper[4981]: E0227 19:00:40.963163 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372fd37b-3c2e-495a-944f-4832079304c6" containerName="extract-content"
Feb 27 19:00:40 crc kubenswrapper[4981]: I0227 19:00:40.963170 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="372fd37b-3c2e-495a-944f-4832079304c6" containerName="extract-content"
Feb 27 19:00:40 crc kubenswrapper[4981]: I0227 19:00:40.963284 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="d889c4d8-3c9b-41b4-be84-4026b0967d12" containerName="registry-server"
Feb 27 19:00:40 crc kubenswrapper[4981]: I0227 19:00:40.963293 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="124fe9f8-0789-4ae2-aa50-6eb0c57f60ea" containerName="console"
Feb 27 19:00:40 crc kubenswrapper[4981]: I0227 19:00:40.963302 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="372fd37b-3c2e-495a-944f-4832079304c6" containerName="registry-server"
Feb 27 19:00:40 crc kubenswrapper[4981]: I0227 19:00:40.963982 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4"
Feb 27 19:00:40 crc kubenswrapper[4981]: I0227 19:00:40.966628 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 27 19:00:40 crc kubenswrapper[4981]: I0227 19:00:40.985954 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4"]
Feb 27 19:00:41 crc kubenswrapper[4981]: I0227 19:00:41.140091 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chqkx\" (UniqueName: \"kubernetes.io/projected/bc605a22-0ad3-4fee-9082-27d102a048f7-kube-api-access-chqkx\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4\" (UID: \"bc605a22-0ad3-4fee-9082-27d102a048f7\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4"
Feb 27 19:00:41 crc kubenswrapper[4981]: I0227 19:00:41.140422 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bc605a22-0ad3-4fee-9082-27d102a048f7-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4\" (UID: \"bc605a22-0ad3-4fee-9082-27d102a048f7\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4"
Feb 27 19:00:41 crc kubenswrapper[4981]: I0227 19:00:41.140458 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bc605a22-0ad3-4fee-9082-27d102a048f7-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4\" (UID: \"bc605a22-0ad3-4fee-9082-27d102a048f7\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4"
Feb 27 19:00:41 crc kubenswrapper[4981]: I0227 19:00:41.242046 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bc605a22-0ad3-4fee-9082-27d102a048f7-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4\" (UID: \"bc605a22-0ad3-4fee-9082-27d102a048f7\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4"
Feb 27 19:00:41 crc kubenswrapper[4981]: I0227 19:00:41.242343 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chqkx\" (UniqueName: \"kubernetes.io/projected/bc605a22-0ad3-4fee-9082-27d102a048f7-kube-api-access-chqkx\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4\" (UID: \"bc605a22-0ad3-4fee-9082-27d102a048f7\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4"
Feb 27 19:00:41 crc kubenswrapper[4981]: I0227 19:00:41.242436 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bc605a22-0ad3-4fee-9082-27d102a048f7-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4\" (UID: \"bc605a22-0ad3-4fee-9082-27d102a048f7\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4"
Feb 27 19:00:41 crc kubenswrapper[4981]: I0227 19:00:41.242888 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bc605a22-0ad3-4fee-9082-27d102a048f7-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4\" (UID: \"bc605a22-0ad3-4fee-9082-27d102a048f7\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4"
Feb 27 19:00:41 crc kubenswrapper[4981]: I0227 19:00:41.242937 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bc605a22-0ad3-4fee-9082-27d102a048f7-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4\" (UID: \"bc605a22-0ad3-4fee-9082-27d102a048f7\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4"
Feb 27 19:00:41 crc kubenswrapper[4981]: I0227 19:00:41.266628 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chqkx\" (UniqueName: \"kubernetes.io/projected/bc605a22-0ad3-4fee-9082-27d102a048f7-kube-api-access-chqkx\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4\" (UID: \"bc605a22-0ad3-4fee-9082-27d102a048f7\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4"
Feb 27 19:00:41 crc kubenswrapper[4981]: I0227 19:00:41.280705 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4"
Feb 27 19:00:42 crc kubenswrapper[4981]: I0227 19:00:42.018176 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4"]
Feb 27 19:00:42 crc kubenswrapper[4981]: W0227 19:00:42.026514 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc605a22_0ad3_4fee_9082_27d102a048f7.slice/crio-618176702f71a4486ced49c9384fa954437233faaa54cb0b964b818d9a57d961 WatchSource:0}: Error finding container 618176702f71a4486ced49c9384fa954437233faaa54cb0b964b818d9a57d961: Status 404 returned error can't find the container with id 618176702f71a4486ced49c9384fa954437233faaa54cb0b964b818d9a57d961
Feb 27 19:00:42 crc kubenswrapper[4981]: I0227 19:00:42.944157 4981 generic.go:334] "Generic (PLEG): container finished" podID="bc605a22-0ad3-4fee-9082-27d102a048f7" containerID="3c16d4474224ded706107eeacb5252587d8cbcd5fc60f7931b685ba077b97467" exitCode=0
Feb 27 19:00:42 crc kubenswrapper[4981]: I0227 19:00:42.944258 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4" event={"ID":"bc605a22-0ad3-4fee-9082-27d102a048f7","Type":"ContainerDied","Data":"3c16d4474224ded706107eeacb5252587d8cbcd5fc60f7931b685ba077b97467"}
Feb 27 19:00:42 crc kubenswrapper[4981]: I0227 19:00:42.944441 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4" event={"ID":"bc605a22-0ad3-4fee-9082-27d102a048f7","Type":"ContainerStarted","Data":"618176702f71a4486ced49c9384fa954437233faaa54cb0b964b818d9a57d961"}
Feb 27 19:00:46 crc kubenswrapper[4981]: I0227 19:00:45.983342 4981 generic.go:334] "Generic (PLEG): container finished" podID="bc605a22-0ad3-4fee-9082-27d102a048f7" containerID="e3868be85db893497427ece880b38743a2f1519206bc7844233487531fbc543a" exitCode=0
Feb 27 19:00:46 crc kubenswrapper[4981]: I0227 19:00:45.984689 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4" event={"ID":"bc605a22-0ad3-4fee-9082-27d102a048f7","Type":"ContainerDied","Data":"e3868be85db893497427ece880b38743a2f1519206bc7844233487531fbc543a"}
Feb 27 19:00:47 crc kubenswrapper[4981]: I0227 19:00:47.016364 4981 generic.go:334] "Generic (PLEG): container finished" podID="bc605a22-0ad3-4fee-9082-27d102a048f7" containerID="9184b04080f3ddc475d786686c05a9289daa0df5acf72d2df478143c09499648" exitCode=0
Feb 27 19:00:47 crc kubenswrapper[4981]: I0227 19:00:47.016446 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4" event={"ID":"bc605a22-0ad3-4fee-9082-27d102a048f7","Type":"ContainerDied","Data":"9184b04080f3ddc475d786686c05a9289daa0df5acf72d2df478143c09499648"}
Feb 27 19:00:48 crc kubenswrapper[4981]: I0227 19:00:48.394949 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4"
Feb 27 19:00:48 crc kubenswrapper[4981]: I0227 19:00:48.505226 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bc605a22-0ad3-4fee-9082-27d102a048f7-bundle\") pod \"bc605a22-0ad3-4fee-9082-27d102a048f7\" (UID: \"bc605a22-0ad3-4fee-9082-27d102a048f7\") "
Feb 27 19:00:48 crc kubenswrapper[4981]: I0227 19:00:48.505273 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bc605a22-0ad3-4fee-9082-27d102a048f7-util\") pod \"bc605a22-0ad3-4fee-9082-27d102a048f7\" (UID: \"bc605a22-0ad3-4fee-9082-27d102a048f7\") "
Feb 27 19:00:48 crc kubenswrapper[4981]: I0227 19:00:48.505319 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chqkx\" (UniqueName: \"kubernetes.io/projected/bc605a22-0ad3-4fee-9082-27d102a048f7-kube-api-access-chqkx\") pod \"bc605a22-0ad3-4fee-9082-27d102a048f7\" (UID: \"bc605a22-0ad3-4fee-9082-27d102a048f7\") "
Feb 27 19:00:48 crc kubenswrapper[4981]: I0227 19:00:48.507099 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc605a22-0ad3-4fee-9082-27d102a048f7-bundle" (OuterVolumeSpecName: "bundle") pod "bc605a22-0ad3-4fee-9082-27d102a048f7" (UID: "bc605a22-0ad3-4fee-9082-27d102a048f7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 19:00:48 crc kubenswrapper[4981]: I0227 19:00:48.514440 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc605a22-0ad3-4fee-9082-27d102a048f7-kube-api-access-chqkx" (OuterVolumeSpecName: "kube-api-access-chqkx") pod "bc605a22-0ad3-4fee-9082-27d102a048f7" (UID: "bc605a22-0ad3-4fee-9082-27d102a048f7"). InnerVolumeSpecName "kube-api-access-chqkx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:00:48 crc kubenswrapper[4981]: I0227 19:00:48.519417 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc605a22-0ad3-4fee-9082-27d102a048f7-util" (OuterVolumeSpecName: "util") pod "bc605a22-0ad3-4fee-9082-27d102a048f7" (UID: "bc605a22-0ad3-4fee-9082-27d102a048f7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 19:00:48 crc kubenswrapper[4981]: I0227 19:00:48.608248 4981 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bc605a22-0ad3-4fee-9082-27d102a048f7-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 19:00:48 crc kubenswrapper[4981]: I0227 19:00:48.608300 4981 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bc605a22-0ad3-4fee-9082-27d102a048f7-util\") on node \"crc\" DevicePath \"\""
Feb 27 19:00:48 crc kubenswrapper[4981]: I0227 19:00:48.608324 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chqkx\" (UniqueName: \"kubernetes.io/projected/bc605a22-0ad3-4fee-9082-27d102a048f7-kube-api-access-chqkx\") on node \"crc\" DevicePath \"\""
Feb 27 19:00:49 crc kubenswrapper[4981]: I0227 19:00:49.319217 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4" event={"ID":"bc605a22-0ad3-4fee-9082-27d102a048f7","Type":"ContainerDied","Data":"618176702f71a4486ced49c9384fa954437233faaa54cb0b964b818d9a57d961"}
Feb 27 19:00:49 crc kubenswrapper[4981]: I0227 19:00:49.319558 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="618176702f71a4486ced49c9384fa954437233faaa54cb0b964b818d9a57d961"
Feb 27 19:00:49 crc kubenswrapper[4981]: I0227 19:00:49.319316 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4"
Feb 27 19:00:50 crc kubenswrapper[4981]: I0227 19:00:50.249192 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 19:00:50 crc kubenswrapper[4981]: I0227 19:00:50.249350 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 19:00:59 crc kubenswrapper[4981]: I0227 19:00:59.856064 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-f8c457c64-w9f5p"]
Feb 27 19:00:59 crc kubenswrapper[4981]: E0227 19:00:59.856756 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc605a22-0ad3-4fee-9082-27d102a048f7" containerName="util"
Feb 27 19:00:59 crc kubenswrapper[4981]: I0227 19:00:59.856770 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc605a22-0ad3-4fee-9082-27d102a048f7" containerName="util"
Feb 27 19:00:59 crc kubenswrapper[4981]: E0227 19:00:59.856789 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc605a22-0ad3-4fee-9082-27d102a048f7" containerName="pull"
Feb 27 19:00:59 crc kubenswrapper[4981]: I0227 19:00:59.856796 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc605a22-0ad3-4fee-9082-27d102a048f7" containerName="pull"
Feb 27 19:00:59 crc kubenswrapper[4981]: E0227 19:00:59.856808 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc605a22-0ad3-4fee-9082-27d102a048f7" containerName="extract"
Feb 27 19:00:59 crc kubenswrapper[4981]: I0227 19:00:59.856816 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc605a22-0ad3-4fee-9082-27d102a048f7" containerName="extract"
Feb 27 19:00:59 crc kubenswrapper[4981]: I0227 19:00:59.856936 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc605a22-0ad3-4fee-9082-27d102a048f7" containerName="extract"
Feb 27 19:00:59 crc kubenswrapper[4981]: I0227 19:00:59.857391 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-f8c457c64-w9f5p"
Feb 27 19:00:59 crc kubenswrapper[4981]: I0227 19:00:59.860737 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Feb 27 19:00:59 crc kubenswrapper[4981]: I0227 19:00:59.860818 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Feb 27 19:00:59 crc kubenswrapper[4981]: I0227 19:00:59.861160 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Feb 27 19:00:59 crc kubenswrapper[4981]: I0227 19:00:59.861992 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-gf4fx"
Feb 27 19:00:59 crc kubenswrapper[4981]: I0227 19:00:59.862719 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Feb 27 19:00:59 crc kubenswrapper[4981]: I0227 19:00:59.882199 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-f8c457c64-w9f5p"]
Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.018415 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8eb3c962-8cf8-4a0c-ad4f-a2f5c8b6f48c-webhook-cert\") pod \"metallb-operator-controller-manager-f8c457c64-w9f5p\" (UID: \"8eb3c962-8cf8-4a0c-ad4f-a2f5c8b6f48c\") " pod="metallb-system/metallb-operator-controller-manager-f8c457c64-w9f5p"
Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.018460 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qqkg\" (UniqueName: \"kubernetes.io/projected/8eb3c962-8cf8-4a0c-ad4f-a2f5c8b6f48c-kube-api-access-7qqkg\") pod \"metallb-operator-controller-manager-f8c457c64-w9f5p\" (UID: \"8eb3c962-8cf8-4a0c-ad4f-a2f5c8b6f48c\") " pod="metallb-system/metallb-operator-controller-manager-f8c457c64-w9f5p"
Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.018500 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8eb3c962-8cf8-4a0c-ad4f-a2f5c8b6f48c-apiservice-cert\") pod \"metallb-operator-controller-manager-f8c457c64-w9f5p\" (UID: \"8eb3c962-8cf8-4a0c-ad4f-a2f5c8b6f48c\") " pod="metallb-system/metallb-operator-controller-manager-f8c457c64-w9f5p"
Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.096257 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6dc7b56ff5-n8fqf"]
Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.097034 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6dc7b56ff5-n8fqf"
Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.101065 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.101211 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.105601 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-gtrfv"
Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.119344 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8eb3c962-8cf8-4a0c-ad4f-a2f5c8b6f48c-webhook-cert\") pod \"metallb-operator-controller-manager-f8c457c64-w9f5p\" (UID: \"8eb3c962-8cf8-4a0c-ad4f-a2f5c8b6f48c\") " pod="metallb-system/metallb-operator-controller-manager-f8c457c64-w9f5p"
Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.119403 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qqkg\" (UniqueName: \"kubernetes.io/projected/8eb3c962-8cf8-4a0c-ad4f-a2f5c8b6f48c-kube-api-access-7qqkg\") pod \"metallb-operator-controller-manager-f8c457c64-w9f5p\" (UID: \"8eb3c962-8cf8-4a0c-ad4f-a2f5c8b6f48c\") " pod="metallb-system/metallb-operator-controller-manager-f8c457c64-w9f5p"
Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.119451 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8eb3c962-8cf8-4a0c-ad4f-a2f5c8b6f48c-apiservice-cert\") pod \"metallb-operator-controller-manager-f8c457c64-w9f5p\" (UID: \"8eb3c962-8cf8-4a0c-ad4f-a2f5c8b6f48c\") " pod="metallb-system/metallb-operator-controller-manager-f8c457c64-w9f5p"
Feb 27 19:01:00 crc 
kubenswrapper[4981]: I0227 19:01:00.120880 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6dc7b56ff5-n8fqf"] Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.131852 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8eb3c962-8cf8-4a0c-ad4f-a2f5c8b6f48c-apiservice-cert\") pod \"metallb-operator-controller-manager-f8c457c64-w9f5p\" (UID: \"8eb3c962-8cf8-4a0c-ad4f-a2f5c8b6f48c\") " pod="metallb-system/metallb-operator-controller-manager-f8c457c64-w9f5p" Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.137784 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qqkg\" (UniqueName: \"kubernetes.io/projected/8eb3c962-8cf8-4a0c-ad4f-a2f5c8b6f48c-kube-api-access-7qqkg\") pod \"metallb-operator-controller-manager-f8c457c64-w9f5p\" (UID: \"8eb3c962-8cf8-4a0c-ad4f-a2f5c8b6f48c\") " pod="metallb-system/metallb-operator-controller-manager-f8c457c64-w9f5p" Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.155940 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8eb3c962-8cf8-4a0c-ad4f-a2f5c8b6f48c-webhook-cert\") pod \"metallb-operator-controller-manager-f8c457c64-w9f5p\" (UID: \"8eb3c962-8cf8-4a0c-ad4f-a2f5c8b6f48c\") " pod="metallb-system/metallb-operator-controller-manager-f8c457c64-w9f5p" Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.172024 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-f8c457c64-w9f5p" Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.220683 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhsnw\" (UniqueName: \"kubernetes.io/projected/21150d0e-51a3-4f9a-beb7-d4511f4680da-kube-api-access-hhsnw\") pod \"metallb-operator-webhook-server-6dc7b56ff5-n8fqf\" (UID: \"21150d0e-51a3-4f9a-beb7-d4511f4680da\") " pod="metallb-system/metallb-operator-webhook-server-6dc7b56ff5-n8fqf" Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.221175 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21150d0e-51a3-4f9a-beb7-d4511f4680da-webhook-cert\") pod \"metallb-operator-webhook-server-6dc7b56ff5-n8fqf\" (UID: \"21150d0e-51a3-4f9a-beb7-d4511f4680da\") " pod="metallb-system/metallb-operator-webhook-server-6dc7b56ff5-n8fqf" Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.221215 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21150d0e-51a3-4f9a-beb7-d4511f4680da-apiservice-cert\") pod \"metallb-operator-webhook-server-6dc7b56ff5-n8fqf\" (UID: \"21150d0e-51a3-4f9a-beb7-d4511f4680da\") " pod="metallb-system/metallb-operator-webhook-server-6dc7b56ff5-n8fqf" Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.322276 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhsnw\" (UniqueName: \"kubernetes.io/projected/21150d0e-51a3-4f9a-beb7-d4511f4680da-kube-api-access-hhsnw\") pod \"metallb-operator-webhook-server-6dc7b56ff5-n8fqf\" (UID: \"21150d0e-51a3-4f9a-beb7-d4511f4680da\") " pod="metallb-system/metallb-operator-webhook-server-6dc7b56ff5-n8fqf" Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.322391 4981 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21150d0e-51a3-4f9a-beb7-d4511f4680da-webhook-cert\") pod \"metallb-operator-webhook-server-6dc7b56ff5-n8fqf\" (UID: \"21150d0e-51a3-4f9a-beb7-d4511f4680da\") " pod="metallb-system/metallb-operator-webhook-server-6dc7b56ff5-n8fqf" Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.322432 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21150d0e-51a3-4f9a-beb7-d4511f4680da-apiservice-cert\") pod \"metallb-operator-webhook-server-6dc7b56ff5-n8fqf\" (UID: \"21150d0e-51a3-4f9a-beb7-d4511f4680da\") " pod="metallb-system/metallb-operator-webhook-server-6dc7b56ff5-n8fqf" Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.326258 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21150d0e-51a3-4f9a-beb7-d4511f4680da-webhook-cert\") pod \"metallb-operator-webhook-server-6dc7b56ff5-n8fqf\" (UID: \"21150d0e-51a3-4f9a-beb7-d4511f4680da\") " pod="metallb-system/metallb-operator-webhook-server-6dc7b56ff5-n8fqf" Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.326518 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21150d0e-51a3-4f9a-beb7-d4511f4680da-apiservice-cert\") pod \"metallb-operator-webhook-server-6dc7b56ff5-n8fqf\" (UID: \"21150d0e-51a3-4f9a-beb7-d4511f4680da\") " pod="metallb-system/metallb-operator-webhook-server-6dc7b56ff5-n8fqf" Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.345348 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhsnw\" (UniqueName: \"kubernetes.io/projected/21150d0e-51a3-4f9a-beb7-d4511f4680da-kube-api-access-hhsnw\") pod \"metallb-operator-webhook-server-6dc7b56ff5-n8fqf\" (UID: 
\"21150d0e-51a3-4f9a-beb7-d4511f4680da\") " pod="metallb-system/metallb-operator-webhook-server-6dc7b56ff5-n8fqf" Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.411671 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6dc7b56ff5-n8fqf" Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.428815 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-f8c457c64-w9f5p"] Feb 27 19:01:00 crc kubenswrapper[4981]: W0227 19:01:00.440187 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eb3c962_8cf8_4a0c_ad4f_a2f5c8b6f48c.slice/crio-05d5420ac97e5b09460663b5225ccdc204be95716f3ca53a8f1bf25dbdd2fa84 WatchSource:0}: Error finding container 05d5420ac97e5b09460663b5225ccdc204be95716f3ca53a8f1bf25dbdd2fa84: Status 404 returned error can't find the container with id 05d5420ac97e5b09460663b5225ccdc204be95716f3ca53a8f1bf25dbdd2fa84 Feb 27 19:01:00 crc kubenswrapper[4981]: I0227 19:01:00.635791 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6dc7b56ff5-n8fqf"] Feb 27 19:01:00 crc kubenswrapper[4981]: W0227 19:01:00.645566 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21150d0e_51a3_4f9a_beb7_d4511f4680da.slice/crio-8df59f4ae8d631537339b766d4fd73bd2acce4de1fe564df8deca419c3d0560c WatchSource:0}: Error finding container 8df59f4ae8d631537339b766d4fd73bd2acce4de1fe564df8deca419c3d0560c: Status 404 returned error can't find the container with id 8df59f4ae8d631537339b766d4fd73bd2acce4de1fe564df8deca419c3d0560c Feb 27 19:01:01 crc kubenswrapper[4981]: I0227 19:01:01.397013 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-f8c457c64-w9f5p" 
event={"ID":"8eb3c962-8cf8-4a0c-ad4f-a2f5c8b6f48c","Type":"ContainerStarted","Data":"05d5420ac97e5b09460663b5225ccdc204be95716f3ca53a8f1bf25dbdd2fa84"} Feb 27 19:01:01 crc kubenswrapper[4981]: I0227 19:01:01.398280 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6dc7b56ff5-n8fqf" event={"ID":"21150d0e-51a3-4f9a-beb7-d4511f4680da","Type":"ContainerStarted","Data":"8df59f4ae8d631537339b766d4fd73bd2acce4de1fe564df8deca419c3d0560c"} Feb 27 19:01:02 crc kubenswrapper[4981]: I0227 19:01:02.705845 4981 scope.go:117] "RemoveContainer" containerID="778eca494f075c4eccaca83613a76eb0e2d323cd1b0b7567006cb80651f9953d" Feb 27 19:01:06 crc kubenswrapper[4981]: I0227 19:01:06.435233 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-f8c457c64-w9f5p" event={"ID":"8eb3c962-8cf8-4a0c-ad4f-a2f5c8b6f48c","Type":"ContainerStarted","Data":"83a7b9b05ec605035476cd35142163febf064184177d940cc7b55a413bd245f9"} Feb 27 19:01:06 crc kubenswrapper[4981]: I0227 19:01:06.435908 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-f8c457c64-w9f5p" Feb 27 19:01:06 crc kubenswrapper[4981]: I0227 19:01:06.438170 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6dc7b56ff5-n8fqf" event={"ID":"21150d0e-51a3-4f9a-beb7-d4511f4680da","Type":"ContainerStarted","Data":"8900b2d7ee6d48dd594a005132706df65d2400f9432fd7b809a34f15bf83e8bb"} Feb 27 19:01:06 crc kubenswrapper[4981]: I0227 19:01:06.438377 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6dc7b56ff5-n8fqf" Feb 27 19:01:06 crc kubenswrapper[4981]: I0227 19:01:06.460930 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-f8c457c64-w9f5p" 
podStartSLOduration=2.144235279 podStartE2EDuration="7.460909738s" podCreationTimestamp="2026-02-27 19:00:59 +0000 UTC" firstStartedPulling="2026-02-27 19:01:00.450512456 +0000 UTC m=+959.929293616" lastFinishedPulling="2026-02-27 19:01:05.767186915 +0000 UTC m=+965.245968075" observedRunningTime="2026-02-27 19:01:06.460702081 +0000 UTC m=+965.939483241" watchObservedRunningTime="2026-02-27 19:01:06.460909738 +0000 UTC m=+965.939690908" Feb 27 19:01:06 crc kubenswrapper[4981]: I0227 19:01:06.487163 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6dc7b56ff5-n8fqf" podStartSLOduration=1.321635903 podStartE2EDuration="6.487140706s" podCreationTimestamp="2026-02-27 19:01:00 +0000 UTC" firstStartedPulling="2026-02-27 19:01:00.667585791 +0000 UTC m=+960.146366951" lastFinishedPulling="2026-02-27 19:01:05.833090604 +0000 UTC m=+965.311871754" observedRunningTime="2026-02-27 19:01:06.481777823 +0000 UTC m=+965.960558983" watchObservedRunningTime="2026-02-27 19:01:06.487140706 +0000 UTC m=+965.965921896" Feb 27 19:01:20 crc kubenswrapper[4981]: I0227 19:01:20.249013 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:01:20 crc kubenswrapper[4981]: I0227 19:01:20.250213 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:01:20 crc kubenswrapper[4981]: I0227 19:01:20.721347 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/metallb-operator-webhook-server-6dc7b56ff5-n8fqf" Feb 27 19:01:32 crc kubenswrapper[4981]: I0227 19:01:32.298862 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ttszc"] Feb 27 19:01:32 crc kubenswrapper[4981]: I0227 19:01:32.301257 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ttszc" Feb 27 19:01:32 crc kubenswrapper[4981]: I0227 19:01:32.317023 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ttszc"] Feb 27 19:01:32 crc kubenswrapper[4981]: I0227 19:01:32.498372 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0111811-f744-4142-80b3-c25c79d7a040-catalog-content\") pod \"certified-operators-ttszc\" (UID: \"e0111811-f744-4142-80b3-c25c79d7a040\") " pod="openshift-marketplace/certified-operators-ttszc" Feb 27 19:01:32 crc kubenswrapper[4981]: I0227 19:01:32.498431 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0111811-f744-4142-80b3-c25c79d7a040-utilities\") pod \"certified-operators-ttszc\" (UID: \"e0111811-f744-4142-80b3-c25c79d7a040\") " pod="openshift-marketplace/certified-operators-ttszc" Feb 27 19:01:32 crc kubenswrapper[4981]: I0227 19:01:32.498479 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79k6c\" (UniqueName: \"kubernetes.io/projected/e0111811-f744-4142-80b3-c25c79d7a040-kube-api-access-79k6c\") pod \"certified-operators-ttszc\" (UID: \"e0111811-f744-4142-80b3-c25c79d7a040\") " pod="openshift-marketplace/certified-operators-ttszc" Feb 27 19:01:32 crc kubenswrapper[4981]: I0227 19:01:32.599122 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-79k6c\" (UniqueName: \"kubernetes.io/projected/e0111811-f744-4142-80b3-c25c79d7a040-kube-api-access-79k6c\") pod \"certified-operators-ttszc\" (UID: \"e0111811-f744-4142-80b3-c25c79d7a040\") " pod="openshift-marketplace/certified-operators-ttszc" Feb 27 19:01:32 crc kubenswrapper[4981]: I0227 19:01:32.599561 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0111811-f744-4142-80b3-c25c79d7a040-catalog-content\") pod \"certified-operators-ttszc\" (UID: \"e0111811-f744-4142-80b3-c25c79d7a040\") " pod="openshift-marketplace/certified-operators-ttszc" Feb 27 19:01:32 crc kubenswrapper[4981]: I0227 19:01:32.600134 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0111811-f744-4142-80b3-c25c79d7a040-catalog-content\") pod \"certified-operators-ttszc\" (UID: \"e0111811-f744-4142-80b3-c25c79d7a040\") " pod="openshift-marketplace/certified-operators-ttszc" Feb 27 19:01:32 crc kubenswrapper[4981]: I0227 19:01:32.600220 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0111811-f744-4142-80b3-c25c79d7a040-utilities\") pod \"certified-operators-ttszc\" (UID: \"e0111811-f744-4142-80b3-c25c79d7a040\") " pod="openshift-marketplace/certified-operators-ttszc" Feb 27 19:01:32 crc kubenswrapper[4981]: I0227 19:01:32.600519 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0111811-f744-4142-80b3-c25c79d7a040-utilities\") pod \"certified-operators-ttszc\" (UID: \"e0111811-f744-4142-80b3-c25c79d7a040\") " pod="openshift-marketplace/certified-operators-ttszc" Feb 27 19:01:32 crc kubenswrapper[4981]: I0227 19:01:32.624770 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79k6c\" (UniqueName: 
\"kubernetes.io/projected/e0111811-f744-4142-80b3-c25c79d7a040-kube-api-access-79k6c\") pod \"certified-operators-ttszc\" (UID: \"e0111811-f744-4142-80b3-c25c79d7a040\") " pod="openshift-marketplace/certified-operators-ttszc" Feb 27 19:01:32 crc kubenswrapper[4981]: I0227 19:01:32.921544 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ttszc" Feb 27 19:01:33 crc kubenswrapper[4981]: I0227 19:01:33.359340 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ttszc"] Feb 27 19:01:33 crc kubenswrapper[4981]: I0227 19:01:33.761776 4981 generic.go:334] "Generic (PLEG): container finished" podID="e0111811-f744-4142-80b3-c25c79d7a040" containerID="5e43c28cb9a4e2195eed9627e07cd8cda90adb956131d16e607a365d3e623082" exitCode=0 Feb 27 19:01:33 crc kubenswrapper[4981]: I0227 19:01:33.761826 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttszc" event={"ID":"e0111811-f744-4142-80b3-c25c79d7a040","Type":"ContainerDied","Data":"5e43c28cb9a4e2195eed9627e07cd8cda90adb956131d16e607a365d3e623082"} Feb 27 19:01:33 crc kubenswrapper[4981]: I0227 19:01:33.762241 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttszc" event={"ID":"e0111811-f744-4142-80b3-c25c79d7a040","Type":"ContainerStarted","Data":"e6bada8db544c68edb6eaf55a8ab10156a231780d1691e32643cc13cba9dc2a5"} Feb 27 19:01:35 crc kubenswrapper[4981]: E0227 19:01:35.591922 4981 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0111811_f744_4142_80b3_c25c79d7a040.slice/crio-b838d06b312947bab1de9ab8399bdc2244e12ec532c7c5782c06df333cd76bce.scope\": RecentStats: unable to find data in memory cache]" Feb 27 19:01:35 crc kubenswrapper[4981]: I0227 19:01:35.780992 4981 generic.go:334] "Generic 
(PLEG): container finished" podID="e0111811-f744-4142-80b3-c25c79d7a040" containerID="b838d06b312947bab1de9ab8399bdc2244e12ec532c7c5782c06df333cd76bce" exitCode=0 Feb 27 19:01:35 crc kubenswrapper[4981]: I0227 19:01:35.781075 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttszc" event={"ID":"e0111811-f744-4142-80b3-c25c79d7a040","Type":"ContainerDied","Data":"b838d06b312947bab1de9ab8399bdc2244e12ec532c7c5782c06df333cd76bce"} Feb 27 19:01:36 crc kubenswrapper[4981]: I0227 19:01:36.791911 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttszc" event={"ID":"e0111811-f744-4142-80b3-c25c79d7a040","Type":"ContainerStarted","Data":"f13945e95c2d7fbfd07c861df4f72c785c39bf37e4c4c77b7c0eff3853a11532"} Feb 27 19:01:36 crc kubenswrapper[4981]: I0227 19:01:36.810690 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ttszc" podStartSLOduration=2.06429295 podStartE2EDuration="4.810668378s" podCreationTimestamp="2026-02-27 19:01:32 +0000 UTC" firstStartedPulling="2026-02-27 19:01:33.763340638 +0000 UTC m=+993.242121838" lastFinishedPulling="2026-02-27 19:01:36.509716096 +0000 UTC m=+995.988497266" observedRunningTime="2026-02-27 19:01:36.807676197 +0000 UTC m=+996.286457367" watchObservedRunningTime="2026-02-27 19:01:36.810668378 +0000 UTC m=+996.289449538" Feb 27 19:01:40 crc kubenswrapper[4981]: I0227 19:01:40.175003 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-f8c457c64-w9f5p" Feb 27 19:01:40 crc kubenswrapper[4981]: I0227 19:01:40.970624 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-jfmbn"] Feb 27 19:01:40 crc kubenswrapper[4981]: I0227 19:01:40.972672 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:01:40 crc kubenswrapper[4981]: I0227 19:01:40.974883 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-tbn8z" Feb 27 19:01:40 crc kubenswrapper[4981]: I0227 19:01:40.974969 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 27 19:01:40 crc kubenswrapper[4981]: I0227 19:01:40.978600 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 27 19:01:40 crc kubenswrapper[4981]: I0227 19:01:40.980892 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-mp522"] Feb 27 19:01:40 crc kubenswrapper[4981]: I0227 19:01:40.981908 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mp522" Feb 27 19:01:40 crc kubenswrapper[4981]: I0227 19:01:40.983671 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 27 19:01:40 crc kubenswrapper[4981]: I0227 19:01:40.997906 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-mp522"] Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.019158 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f2c01995-bfbd-4e83-bc8e-e476d7d32a4b-cert\") pod \"frr-k8s-webhook-server-7f989f654f-mp522\" (UID: \"f2c01995-bfbd-4e83-bc8e-e476d7d32a4b\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mp522" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.019479 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/24537f79-2aa5-4ba1-afc0-e91183569040-frr-sockets\") pod 
\"frr-k8s-jfmbn\" (UID: \"24537f79-2aa5-4ba1-afc0-e91183569040\") " pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.019514 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49x6r\" (UniqueName: \"kubernetes.io/projected/f2c01995-bfbd-4e83-bc8e-e476d7d32a4b-kube-api-access-49x6r\") pod \"frr-k8s-webhook-server-7f989f654f-mp522\" (UID: \"f2c01995-bfbd-4e83-bc8e-e476d7d32a4b\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mp522" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.019556 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/24537f79-2aa5-4ba1-afc0-e91183569040-frr-conf\") pod \"frr-k8s-jfmbn\" (UID: \"24537f79-2aa5-4ba1-afc0-e91183569040\") " pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.019600 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/24537f79-2aa5-4ba1-afc0-e91183569040-reloader\") pod \"frr-k8s-jfmbn\" (UID: \"24537f79-2aa5-4ba1-afc0-e91183569040\") " pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.019638 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/24537f79-2aa5-4ba1-afc0-e91183569040-frr-startup\") pod \"frr-k8s-jfmbn\" (UID: \"24537f79-2aa5-4ba1-afc0-e91183569040\") " pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.019663 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24537f79-2aa5-4ba1-afc0-e91183569040-metrics-certs\") pod \"frr-k8s-jfmbn\" (UID: 
\"24537f79-2aa5-4ba1-afc0-e91183569040\") " pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.019691 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwbnk\" (UniqueName: \"kubernetes.io/projected/24537f79-2aa5-4ba1-afc0-e91183569040-kube-api-access-dwbnk\") pod \"frr-k8s-jfmbn\" (UID: \"24537f79-2aa5-4ba1-afc0-e91183569040\") " pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.019842 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/24537f79-2aa5-4ba1-afc0-e91183569040-metrics\") pod \"frr-k8s-jfmbn\" (UID: \"24537f79-2aa5-4ba1-afc0-e91183569040\") " pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.073368 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-kzbf5"] Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.074137 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-kzbf5" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.078579 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.078626 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.078909 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.081911 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-q6vtm" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.093381 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-86ddb6bd46-rwrfs"] Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.094216 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-rwrfs" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.098359 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.107838 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-rwrfs"] Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.121933 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/24537f79-2aa5-4ba1-afc0-e91183569040-reloader\") pod \"frr-k8s-jfmbn\" (UID: \"24537f79-2aa5-4ba1-afc0-e91183569040\") " pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.121971 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/24537f79-2aa5-4ba1-afc0-e91183569040-frr-startup\") pod \"frr-k8s-jfmbn\" (UID: \"24537f79-2aa5-4ba1-afc0-e91183569040\") " pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.121998 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24537f79-2aa5-4ba1-afc0-e91183569040-metrics-certs\") pod \"frr-k8s-jfmbn\" (UID: \"24537f79-2aa5-4ba1-afc0-e91183569040\") " pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.122021 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwbnk\" (UniqueName: \"kubernetes.io/projected/24537f79-2aa5-4ba1-afc0-e91183569040-kube-api-access-dwbnk\") pod \"frr-k8s-jfmbn\" (UID: \"24537f79-2aa5-4ba1-afc0-e91183569040\") " pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.122236 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/24537f79-2aa5-4ba1-afc0-e91183569040-metrics\") pod \"frr-k8s-jfmbn\" (UID: \"24537f79-2aa5-4ba1-afc0-e91183569040\") " pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.122274 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f2c01995-bfbd-4e83-bc8e-e476d7d32a4b-cert\") pod \"frr-k8s-webhook-server-7f989f654f-mp522\" (UID: \"f2c01995-bfbd-4e83-bc8e-e476d7d32a4b\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mp522" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.122300 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/24537f79-2aa5-4ba1-afc0-e91183569040-frr-sockets\") pod \"frr-k8s-jfmbn\" (UID: \"24537f79-2aa5-4ba1-afc0-e91183569040\") " pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.122324 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49x6r\" (UniqueName: \"kubernetes.io/projected/f2c01995-bfbd-4e83-bc8e-e476d7d32a4b-kube-api-access-49x6r\") pod \"frr-k8s-webhook-server-7f989f654f-mp522\" (UID: \"f2c01995-bfbd-4e83-bc8e-e476d7d32a4b\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mp522" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.122378 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/24537f79-2aa5-4ba1-afc0-e91183569040-frr-conf\") pod \"frr-k8s-jfmbn\" (UID: \"24537f79-2aa5-4ba1-afc0-e91183569040\") " pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.122900 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/24537f79-2aa5-4ba1-afc0-e91183569040-reloader\") pod 
\"frr-k8s-jfmbn\" (UID: \"24537f79-2aa5-4ba1-afc0-e91183569040\") " pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.123228 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/24537f79-2aa5-4ba1-afc0-e91183569040-frr-sockets\") pod \"frr-k8s-jfmbn\" (UID: \"24537f79-2aa5-4ba1-afc0-e91183569040\") " pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.123460 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/24537f79-2aa5-4ba1-afc0-e91183569040-metrics\") pod \"frr-k8s-jfmbn\" (UID: \"24537f79-2aa5-4ba1-afc0-e91183569040\") " pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.123671 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/24537f79-2aa5-4ba1-afc0-e91183569040-frr-conf\") pod \"frr-k8s-jfmbn\" (UID: \"24537f79-2aa5-4ba1-afc0-e91183569040\") " pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:01:41 crc kubenswrapper[4981]: E0227 19:01:41.123726 4981 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 27 19:01:41 crc kubenswrapper[4981]: E0227 19:01:41.123796 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24537f79-2aa5-4ba1-afc0-e91183569040-metrics-certs podName:24537f79-2aa5-4ba1-afc0-e91183569040 nodeName:}" failed. No retries permitted until 2026-02-27 19:01:41.623775383 +0000 UTC m=+1001.102556543 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24537f79-2aa5-4ba1-afc0-e91183569040-metrics-certs") pod "frr-k8s-jfmbn" (UID: "24537f79-2aa5-4ba1-afc0-e91183569040") : secret "frr-k8s-certs-secret" not found Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.124146 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/24537f79-2aa5-4ba1-afc0-e91183569040-frr-startup\") pod \"frr-k8s-jfmbn\" (UID: \"24537f79-2aa5-4ba1-afc0-e91183569040\") " pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.135235 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f2c01995-bfbd-4e83-bc8e-e476d7d32a4b-cert\") pod \"frr-k8s-webhook-server-7f989f654f-mp522\" (UID: \"f2c01995-bfbd-4e83-bc8e-e476d7d32a4b\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mp522" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.154602 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwbnk\" (UniqueName: \"kubernetes.io/projected/24537f79-2aa5-4ba1-afc0-e91183569040-kube-api-access-dwbnk\") pod \"frr-k8s-jfmbn\" (UID: \"24537f79-2aa5-4ba1-afc0-e91183569040\") " pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.154870 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49x6r\" (UniqueName: \"kubernetes.io/projected/f2c01995-bfbd-4e83-bc8e-e476d7d32a4b-kube-api-access-49x6r\") pod \"frr-k8s-webhook-server-7f989f654f-mp522\" (UID: \"f2c01995-bfbd-4e83-bc8e-e476d7d32a4b\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mp522" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.223874 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgkb4\" (UniqueName: 
\"kubernetes.io/projected/373faaaa-18eb-4e83-80f8-7828aea58a3a-kube-api-access-fgkb4\") pod \"controller-86ddb6bd46-rwrfs\" (UID: \"373faaaa-18eb-4e83-80f8-7828aea58a3a\") " pod="metallb-system/controller-86ddb6bd46-rwrfs" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.223934 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2xr5\" (UniqueName: \"kubernetes.io/projected/a5fc2773-7650-4e03-9c68-6cbdab555ae0-kube-api-access-h2xr5\") pod \"speaker-kzbf5\" (UID: \"a5fc2773-7650-4e03-9c68-6cbdab555ae0\") " pod="metallb-system/speaker-kzbf5" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.223959 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a5fc2773-7650-4e03-9c68-6cbdab555ae0-memberlist\") pod \"speaker-kzbf5\" (UID: \"a5fc2773-7650-4e03-9c68-6cbdab555ae0\") " pod="metallb-system/speaker-kzbf5" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.224401 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/373faaaa-18eb-4e83-80f8-7828aea58a3a-cert\") pod \"controller-86ddb6bd46-rwrfs\" (UID: \"373faaaa-18eb-4e83-80f8-7828aea58a3a\") " pod="metallb-system/controller-86ddb6bd46-rwrfs" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.224447 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/373faaaa-18eb-4e83-80f8-7828aea58a3a-metrics-certs\") pod \"controller-86ddb6bd46-rwrfs\" (UID: \"373faaaa-18eb-4e83-80f8-7828aea58a3a\") " pod="metallb-system/controller-86ddb6bd46-rwrfs" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.224474 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/a5fc2773-7650-4e03-9c68-6cbdab555ae0-metrics-certs\") pod \"speaker-kzbf5\" (UID: \"a5fc2773-7650-4e03-9c68-6cbdab555ae0\") " pod="metallb-system/speaker-kzbf5" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.224628 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a5fc2773-7650-4e03-9c68-6cbdab555ae0-metallb-excludel2\") pod \"speaker-kzbf5\" (UID: \"a5fc2773-7650-4e03-9c68-6cbdab555ae0\") " pod="metallb-system/speaker-kzbf5" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.303113 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mp522" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.326044 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgkb4\" (UniqueName: \"kubernetes.io/projected/373faaaa-18eb-4e83-80f8-7828aea58a3a-kube-api-access-fgkb4\") pod \"controller-86ddb6bd46-rwrfs\" (UID: \"373faaaa-18eb-4e83-80f8-7828aea58a3a\") " pod="metallb-system/controller-86ddb6bd46-rwrfs" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.326117 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2xr5\" (UniqueName: \"kubernetes.io/projected/a5fc2773-7650-4e03-9c68-6cbdab555ae0-kube-api-access-h2xr5\") pod \"speaker-kzbf5\" (UID: \"a5fc2773-7650-4e03-9c68-6cbdab555ae0\") " pod="metallb-system/speaker-kzbf5" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.326140 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a5fc2773-7650-4e03-9c68-6cbdab555ae0-memberlist\") pod \"speaker-kzbf5\" (UID: \"a5fc2773-7650-4e03-9c68-6cbdab555ae0\") " pod="metallb-system/speaker-kzbf5" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.326161 4981 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/373faaaa-18eb-4e83-80f8-7828aea58a3a-cert\") pod \"controller-86ddb6bd46-rwrfs\" (UID: \"373faaaa-18eb-4e83-80f8-7828aea58a3a\") " pod="metallb-system/controller-86ddb6bd46-rwrfs" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.326175 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/373faaaa-18eb-4e83-80f8-7828aea58a3a-metrics-certs\") pod \"controller-86ddb6bd46-rwrfs\" (UID: \"373faaaa-18eb-4e83-80f8-7828aea58a3a\") " pod="metallb-system/controller-86ddb6bd46-rwrfs" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.326199 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5fc2773-7650-4e03-9c68-6cbdab555ae0-metrics-certs\") pod \"speaker-kzbf5\" (UID: \"a5fc2773-7650-4e03-9c68-6cbdab555ae0\") " pod="metallb-system/speaker-kzbf5" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.326242 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a5fc2773-7650-4e03-9c68-6cbdab555ae0-metallb-excludel2\") pod \"speaker-kzbf5\" (UID: \"a5fc2773-7650-4e03-9c68-6cbdab555ae0\") " pod="metallb-system/speaker-kzbf5" Feb 27 19:01:41 crc kubenswrapper[4981]: E0227 19:01:41.326282 4981 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 27 19:01:41 crc kubenswrapper[4981]: E0227 19:01:41.326350 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5fc2773-7650-4e03-9c68-6cbdab555ae0-memberlist podName:a5fc2773-7650-4e03-9c68-6cbdab555ae0 nodeName:}" failed. No retries permitted until 2026-02-27 19:01:41.826331256 +0000 UTC m=+1001.305112416 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a5fc2773-7650-4e03-9c68-6cbdab555ae0-memberlist") pod "speaker-kzbf5" (UID: "a5fc2773-7650-4e03-9c68-6cbdab555ae0") : secret "metallb-memberlist" not found Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.326816 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a5fc2773-7650-4e03-9c68-6cbdab555ae0-metallb-excludel2\") pod \"speaker-kzbf5\" (UID: \"a5fc2773-7650-4e03-9c68-6cbdab555ae0\") " pod="metallb-system/speaker-kzbf5" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.328634 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.331288 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/373faaaa-18eb-4e83-80f8-7828aea58a3a-metrics-certs\") pod \"controller-86ddb6bd46-rwrfs\" (UID: \"373faaaa-18eb-4e83-80f8-7828aea58a3a\") " pod="metallb-system/controller-86ddb6bd46-rwrfs" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.331437 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5fc2773-7650-4e03-9c68-6cbdab555ae0-metrics-certs\") pod \"speaker-kzbf5\" (UID: \"a5fc2773-7650-4e03-9c68-6cbdab555ae0\") " pod="metallb-system/speaker-kzbf5" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.343091 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2xr5\" (UniqueName: \"kubernetes.io/projected/a5fc2773-7650-4e03-9c68-6cbdab555ae0-kube-api-access-h2xr5\") pod \"speaker-kzbf5\" (UID: \"a5fc2773-7650-4e03-9c68-6cbdab555ae0\") " pod="metallb-system/speaker-kzbf5" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.344351 4981 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/373faaaa-18eb-4e83-80f8-7828aea58a3a-cert\") pod \"controller-86ddb6bd46-rwrfs\" (UID: \"373faaaa-18eb-4e83-80f8-7828aea58a3a\") " pod="metallb-system/controller-86ddb6bd46-rwrfs" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.344699 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgkb4\" (UniqueName: \"kubernetes.io/projected/373faaaa-18eb-4e83-80f8-7828aea58a3a-kube-api-access-fgkb4\") pod \"controller-86ddb6bd46-rwrfs\" (UID: \"373faaaa-18eb-4e83-80f8-7828aea58a3a\") " pod="metallb-system/controller-86ddb6bd46-rwrfs" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.411087 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-86ddb6bd46-rwrfs" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.509992 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-mp522"] Feb 27 19:01:41 crc kubenswrapper[4981]: W0227 19:01:41.513944 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2c01995_bfbd_4e83_bc8e_e476d7d32a4b.slice/crio-693eb141ba18b30ab660db93910a7859bb40652857794cf81286f817f472d22f WatchSource:0}: Error finding container 693eb141ba18b30ab660db93910a7859bb40652857794cf81286f817f472d22f: Status 404 returned error can't find the container with id 693eb141ba18b30ab660db93910a7859bb40652857794cf81286f817f472d22f Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.586453 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-rwrfs"] Feb 27 19:01:41 crc kubenswrapper[4981]: W0227 19:01:41.591164 4981 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod373faaaa_18eb_4e83_80f8_7828aea58a3a.slice/crio-844fb073c8874f1659219adf08afe0c80b0b8e066ee657916eccf10d806989be WatchSource:0}: Error finding container 844fb073c8874f1659219adf08afe0c80b0b8e066ee657916eccf10d806989be: Status 404 returned error can't find the container with id 844fb073c8874f1659219adf08afe0c80b0b8e066ee657916eccf10d806989be Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.631200 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24537f79-2aa5-4ba1-afc0-e91183569040-metrics-certs\") pod \"frr-k8s-jfmbn\" (UID: \"24537f79-2aa5-4ba1-afc0-e91183569040\") " pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.637123 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24537f79-2aa5-4ba1-afc0-e91183569040-metrics-certs\") pod \"frr-k8s-jfmbn\" (UID: \"24537f79-2aa5-4ba1-afc0-e91183569040\") " pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.834522 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a5fc2773-7650-4e03-9c68-6cbdab555ae0-memberlist\") pod \"speaker-kzbf5\" (UID: \"a5fc2773-7650-4e03-9c68-6cbdab555ae0\") " pod="metallb-system/speaker-kzbf5" Feb 27 19:01:41 crc kubenswrapper[4981]: E0227 19:01:41.834725 4981 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 27 19:01:41 crc kubenswrapper[4981]: E0227 19:01:41.834842 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5fc2773-7650-4e03-9c68-6cbdab555ae0-memberlist podName:a5fc2773-7650-4e03-9c68-6cbdab555ae0 nodeName:}" failed. 
No retries permitted until 2026-02-27 19:01:42.834813882 +0000 UTC m=+1002.313595082 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a5fc2773-7650-4e03-9c68-6cbdab555ae0-memberlist") pod "speaker-kzbf5" (UID: "a5fc2773-7650-4e03-9c68-6cbdab555ae0") : secret "metallb-memberlist" not found Feb 27 19:01:41 crc kubenswrapper[4981]: I0227 19:01:41.890388 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:01:42 crc kubenswrapper[4981]: I0227 19:01:42.118417 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-rwrfs" event={"ID":"373faaaa-18eb-4e83-80f8-7828aea58a3a","Type":"ContainerStarted","Data":"f8489fbf990e8ce94adc28cae369c6eb459213ca83e95b11dae2a38198fb81b6"} Feb 27 19:01:42 crc kubenswrapper[4981]: I0227 19:01:42.118798 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-rwrfs" event={"ID":"373faaaa-18eb-4e83-80f8-7828aea58a3a","Type":"ContainerStarted","Data":"844fb073c8874f1659219adf08afe0c80b0b8e066ee657916eccf10d806989be"} Feb 27 19:01:42 crc kubenswrapper[4981]: I0227 19:01:42.119667 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mp522" event={"ID":"f2c01995-bfbd-4e83-bc8e-e476d7d32a4b","Type":"ContainerStarted","Data":"693eb141ba18b30ab660db93910a7859bb40652857794cf81286f817f472d22f"} Feb 27 19:01:42 crc kubenswrapper[4981]: I0227 19:01:42.849627 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a5fc2773-7650-4e03-9c68-6cbdab555ae0-memberlist\") pod \"speaker-kzbf5\" (UID: \"a5fc2773-7650-4e03-9c68-6cbdab555ae0\") " pod="metallb-system/speaker-kzbf5" Feb 27 19:01:42 crc kubenswrapper[4981]: I0227 19:01:42.856495 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"memberlist\" (UniqueName: \"kubernetes.io/secret/a5fc2773-7650-4e03-9c68-6cbdab555ae0-memberlist\") pod \"speaker-kzbf5\" (UID: \"a5fc2773-7650-4e03-9c68-6cbdab555ae0\") " pod="metallb-system/speaker-kzbf5" Feb 27 19:01:42 crc kubenswrapper[4981]: I0227 19:01:42.891777 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-kzbf5" Feb 27 19:01:42 crc kubenswrapper[4981]: I0227 19:01:42.922123 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ttszc" Feb 27 19:01:42 crc kubenswrapper[4981]: I0227 19:01:42.923266 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ttszc" Feb 27 19:01:42 crc kubenswrapper[4981]: W0227 19:01:42.923790 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5fc2773_7650_4e03_9c68_6cbdab555ae0.slice/crio-992cb0a3bd42bfddd3f69c567b8141a2e0aef710156ac9b01830547cf6c31ef8 WatchSource:0}: Error finding container 992cb0a3bd42bfddd3f69c567b8141a2e0aef710156ac9b01830547cf6c31ef8: Status 404 returned error can't find the container with id 992cb0a3bd42bfddd3f69c567b8141a2e0aef710156ac9b01830547cf6c31ef8 Feb 27 19:01:42 crc kubenswrapper[4981]: I0227 19:01:42.990166 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ttszc" Feb 27 19:01:43 crc kubenswrapper[4981]: I0227 19:01:43.127662 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kzbf5" event={"ID":"a5fc2773-7650-4e03-9c68-6cbdab555ae0","Type":"ContainerStarted","Data":"992cb0a3bd42bfddd3f69c567b8141a2e0aef710156ac9b01830547cf6c31ef8"} Feb 27 19:01:43 crc kubenswrapper[4981]: I0227 19:01:43.135251 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jfmbn" 
event={"ID":"24537f79-2aa5-4ba1-afc0-e91183569040","Type":"ContainerStarted","Data":"1cd2b0fcce5841b7b97f648f9e9a0527d2787a4f9450b29039c0ea082fe488d9"} Feb 27 19:01:43 crc kubenswrapper[4981]: I0227 19:01:43.137146 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-rwrfs" event={"ID":"373faaaa-18eb-4e83-80f8-7828aea58a3a","Type":"ContainerStarted","Data":"c07e775a0eebf71660fcc680e68f4c09ff82dc4fc8f97642703d1ade440374f0"} Feb 27 19:01:43 crc kubenswrapper[4981]: I0227 19:01:43.137576 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-86ddb6bd46-rwrfs" Feb 27 19:01:43 crc kubenswrapper[4981]: I0227 19:01:43.155712 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-86ddb6bd46-rwrfs" podStartSLOduration=2.155696276 podStartE2EDuration="2.155696276s" podCreationTimestamp="2026-02-27 19:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:01:43.153724297 +0000 UTC m=+1002.632505477" watchObservedRunningTime="2026-02-27 19:01:43.155696276 +0000 UTC m=+1002.634477436" Feb 27 19:01:43 crc kubenswrapper[4981]: I0227 19:01:43.192787 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ttszc" Feb 27 19:01:43 crc kubenswrapper[4981]: I0227 19:01:43.242518 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ttszc"] Feb 27 19:01:44 crc kubenswrapper[4981]: I0227 19:01:44.147257 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kzbf5" event={"ID":"a5fc2773-7650-4e03-9c68-6cbdab555ae0","Type":"ContainerStarted","Data":"694d85d71ff8e2a72a1bb6662a41d66954a62e421d65c4dbbb631e8f94fd215b"} Feb 27 19:01:44 crc kubenswrapper[4981]: I0227 19:01:44.147595 4981 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="metallb-system/speaker-kzbf5" event={"ID":"a5fc2773-7650-4e03-9c68-6cbdab555ae0","Type":"ContainerStarted","Data":"1b7116398483944e5391c38ecba53046fabe621b5ab216eec76cc0d097efb661"} Feb 27 19:01:44 crc kubenswrapper[4981]: I0227 19:01:44.147826 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-kzbf5" Feb 27 19:01:44 crc kubenswrapper[4981]: I0227 19:01:44.166455 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-kzbf5" podStartSLOduration=3.166415659 podStartE2EDuration="3.166415659s" podCreationTimestamp="2026-02-27 19:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:01:44.165208202 +0000 UTC m=+1003.643989362" watchObservedRunningTime="2026-02-27 19:01:44.166415659 +0000 UTC m=+1003.645196839" Feb 27 19:01:45 crc kubenswrapper[4981]: I0227 19:01:45.154556 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ttszc" podUID="e0111811-f744-4142-80b3-c25c79d7a040" containerName="registry-server" containerID="cri-o://f13945e95c2d7fbfd07c861df4f72c785c39bf37e4c4c77b7c0eff3853a11532" gracePeriod=2 Feb 27 19:01:45 crc kubenswrapper[4981]: I0227 19:01:45.530850 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ttszc" Feb 27 19:01:45 crc kubenswrapper[4981]: I0227 19:01:45.694780 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0111811-f744-4142-80b3-c25c79d7a040-utilities\") pod \"e0111811-f744-4142-80b3-c25c79d7a040\" (UID: \"e0111811-f744-4142-80b3-c25c79d7a040\") " Feb 27 19:01:45 crc kubenswrapper[4981]: I0227 19:01:45.695048 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79k6c\" (UniqueName: \"kubernetes.io/projected/e0111811-f744-4142-80b3-c25c79d7a040-kube-api-access-79k6c\") pod \"e0111811-f744-4142-80b3-c25c79d7a040\" (UID: \"e0111811-f744-4142-80b3-c25c79d7a040\") " Feb 27 19:01:45 crc kubenswrapper[4981]: I0227 19:01:45.695159 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0111811-f744-4142-80b3-c25c79d7a040-catalog-content\") pod \"e0111811-f744-4142-80b3-c25c79d7a040\" (UID: \"e0111811-f744-4142-80b3-c25c79d7a040\") " Feb 27 19:01:45 crc kubenswrapper[4981]: I0227 19:01:45.706203 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0111811-f744-4142-80b3-c25c79d7a040-utilities" (OuterVolumeSpecName: "utilities") pod "e0111811-f744-4142-80b3-c25c79d7a040" (UID: "e0111811-f744-4142-80b3-c25c79d7a040"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:01:45 crc kubenswrapper[4981]: I0227 19:01:45.716145 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0111811-f744-4142-80b3-c25c79d7a040-kube-api-access-79k6c" (OuterVolumeSpecName: "kube-api-access-79k6c") pod "e0111811-f744-4142-80b3-c25c79d7a040" (UID: "e0111811-f744-4142-80b3-c25c79d7a040"). InnerVolumeSpecName "kube-api-access-79k6c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:01:45 crc kubenswrapper[4981]: I0227 19:01:45.784951 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0111811-f744-4142-80b3-c25c79d7a040-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0111811-f744-4142-80b3-c25c79d7a040" (UID: "e0111811-f744-4142-80b3-c25c79d7a040"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:01:45 crc kubenswrapper[4981]: I0227 19:01:45.796946 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0111811-f744-4142-80b3-c25c79d7a040-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:01:45 crc kubenswrapper[4981]: I0227 19:01:45.796984 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79k6c\" (UniqueName: \"kubernetes.io/projected/e0111811-f744-4142-80b3-c25c79d7a040-kube-api-access-79k6c\") on node \"crc\" DevicePath \"\"" Feb 27 19:01:45 crc kubenswrapper[4981]: I0227 19:01:45.796998 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0111811-f744-4142-80b3-c25c79d7a040-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:01:46 crc kubenswrapper[4981]: I0227 19:01:46.165617 4981 generic.go:334] "Generic (PLEG): container finished" podID="e0111811-f744-4142-80b3-c25c79d7a040" containerID="f13945e95c2d7fbfd07c861df4f72c785c39bf37e4c4c77b7c0eff3853a11532" exitCode=0 Feb 27 19:01:46 crc kubenswrapper[4981]: I0227 19:01:46.165720 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ttszc" Feb 27 19:01:46 crc kubenswrapper[4981]: I0227 19:01:46.165771 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttszc" event={"ID":"e0111811-f744-4142-80b3-c25c79d7a040","Type":"ContainerDied","Data":"f13945e95c2d7fbfd07c861df4f72c785c39bf37e4c4c77b7c0eff3853a11532"} Feb 27 19:01:46 crc kubenswrapper[4981]: I0227 19:01:46.165805 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttszc" event={"ID":"e0111811-f744-4142-80b3-c25c79d7a040","Type":"ContainerDied","Data":"e6bada8db544c68edb6eaf55a8ab10156a231780d1691e32643cc13cba9dc2a5"} Feb 27 19:01:46 crc kubenswrapper[4981]: I0227 19:01:46.165828 4981 scope.go:117] "RemoveContainer" containerID="f13945e95c2d7fbfd07c861df4f72c785c39bf37e4c4c77b7c0eff3853a11532" Feb 27 19:01:46 crc kubenswrapper[4981]: I0227 19:01:46.184890 4981 scope.go:117] "RemoveContainer" containerID="b838d06b312947bab1de9ab8399bdc2244e12ec532c7c5782c06df333cd76bce" Feb 27 19:01:46 crc kubenswrapper[4981]: I0227 19:01:46.196217 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ttszc"] Feb 27 19:01:46 crc kubenswrapper[4981]: I0227 19:01:46.204475 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ttszc"] Feb 27 19:01:46 crc kubenswrapper[4981]: I0227 19:01:46.213615 4981 scope.go:117] "RemoveContainer" containerID="5e43c28cb9a4e2195eed9627e07cd8cda90adb956131d16e607a365d3e623082" Feb 27 19:01:46 crc kubenswrapper[4981]: I0227 19:01:46.230803 4981 scope.go:117] "RemoveContainer" containerID="f13945e95c2d7fbfd07c861df4f72c785c39bf37e4c4c77b7c0eff3853a11532" Feb 27 19:01:46 crc kubenswrapper[4981]: E0227 19:01:46.231320 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f13945e95c2d7fbfd07c861df4f72c785c39bf37e4c4c77b7c0eff3853a11532\": container with ID starting with f13945e95c2d7fbfd07c861df4f72c785c39bf37e4c4c77b7c0eff3853a11532 not found: ID does not exist" containerID="f13945e95c2d7fbfd07c861df4f72c785c39bf37e4c4c77b7c0eff3853a11532" Feb 27 19:01:46 crc kubenswrapper[4981]: I0227 19:01:46.231359 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f13945e95c2d7fbfd07c861df4f72c785c39bf37e4c4c77b7c0eff3853a11532"} err="failed to get container status \"f13945e95c2d7fbfd07c861df4f72c785c39bf37e4c4c77b7c0eff3853a11532\": rpc error: code = NotFound desc = could not find container \"f13945e95c2d7fbfd07c861df4f72c785c39bf37e4c4c77b7c0eff3853a11532\": container with ID starting with f13945e95c2d7fbfd07c861df4f72c785c39bf37e4c4c77b7c0eff3853a11532 not found: ID does not exist" Feb 27 19:01:46 crc kubenswrapper[4981]: I0227 19:01:46.231385 4981 scope.go:117] "RemoveContainer" containerID="b838d06b312947bab1de9ab8399bdc2244e12ec532c7c5782c06df333cd76bce" Feb 27 19:01:46 crc kubenswrapper[4981]: E0227 19:01:46.231788 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b838d06b312947bab1de9ab8399bdc2244e12ec532c7c5782c06df333cd76bce\": container with ID starting with b838d06b312947bab1de9ab8399bdc2244e12ec532c7c5782c06df333cd76bce not found: ID does not exist" containerID="b838d06b312947bab1de9ab8399bdc2244e12ec532c7c5782c06df333cd76bce" Feb 27 19:01:46 crc kubenswrapper[4981]: I0227 19:01:46.231831 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b838d06b312947bab1de9ab8399bdc2244e12ec532c7c5782c06df333cd76bce"} err="failed to get container status \"b838d06b312947bab1de9ab8399bdc2244e12ec532c7c5782c06df333cd76bce\": rpc error: code = NotFound desc = could not find container \"b838d06b312947bab1de9ab8399bdc2244e12ec532c7c5782c06df333cd76bce\": container with ID 
starting with b838d06b312947bab1de9ab8399bdc2244e12ec532c7c5782c06df333cd76bce not found: ID does not exist" Feb 27 19:01:46 crc kubenswrapper[4981]: I0227 19:01:46.231855 4981 scope.go:117] "RemoveContainer" containerID="5e43c28cb9a4e2195eed9627e07cd8cda90adb956131d16e607a365d3e623082" Feb 27 19:01:46 crc kubenswrapper[4981]: E0227 19:01:46.232625 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e43c28cb9a4e2195eed9627e07cd8cda90adb956131d16e607a365d3e623082\": container with ID starting with 5e43c28cb9a4e2195eed9627e07cd8cda90adb956131d16e607a365d3e623082 not found: ID does not exist" containerID="5e43c28cb9a4e2195eed9627e07cd8cda90adb956131d16e607a365d3e623082" Feb 27 19:01:46 crc kubenswrapper[4981]: I0227 19:01:46.232656 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e43c28cb9a4e2195eed9627e07cd8cda90adb956131d16e607a365d3e623082"} err="failed to get container status \"5e43c28cb9a4e2195eed9627e07cd8cda90adb956131d16e607a365d3e623082\": rpc error: code = NotFound desc = could not find container \"5e43c28cb9a4e2195eed9627e07cd8cda90adb956131d16e607a365d3e623082\": container with ID starting with 5e43c28cb9a4e2195eed9627e07cd8cda90adb956131d16e607a365d3e623082 not found: ID does not exist" Feb 27 19:01:47 crc kubenswrapper[4981]: I0227 19:01:47.638845 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0111811-f744-4142-80b3-c25c79d7a040" path="/var/lib/kubelet/pods/e0111811-f744-4142-80b3-c25c79d7a040/volumes" Feb 27 19:01:50 crc kubenswrapper[4981]: I0227 19:01:50.248983 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:01:50 crc kubenswrapper[4981]: I0227 
19:01:50.249081 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:01:50 crc kubenswrapper[4981]: I0227 19:01:50.249149 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 19:01:50 crc kubenswrapper[4981]: I0227 19:01:50.249955 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"219fa48bb79b5cd44ef23b0ba5b266e3305b85445a083e120d72a5d185159bb6"} pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 19:01:50 crc kubenswrapper[4981]: I0227 19:01:50.250085 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" containerID="cri-o://219fa48bb79b5cd44ef23b0ba5b266e3305b85445a083e120d72a5d185159bb6" gracePeriod=600 Feb 27 19:01:51 crc kubenswrapper[4981]: I0227 19:01:51.202672 4981 generic.go:334] "Generic (PLEG): container finished" podID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerID="219fa48bb79b5cd44ef23b0ba5b266e3305b85445a083e120d72a5d185159bb6" exitCode=0 Feb 27 19:01:51 crc kubenswrapper[4981]: I0227 19:01:51.202901 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerDied","Data":"219fa48bb79b5cd44ef23b0ba5b266e3305b85445a083e120d72a5d185159bb6"} Feb 27 19:01:51 crc 
kubenswrapper[4981]: I0227 19:01:51.203268 4981 scope.go:117] "RemoveContainer" containerID="7d611e423ab1d303ac9796cb0e04da4b0a780cfed24b834c8ebeafc14a8a6963" Feb 27 19:01:52 crc kubenswrapper[4981]: I0227 19:01:52.210560 4981 generic.go:334] "Generic (PLEG): container finished" podID="24537f79-2aa5-4ba1-afc0-e91183569040" containerID="aa209d296c9a731090eff0b87cf96c1e883b4098363de1e8d95f4c86cf7cf502" exitCode=0 Feb 27 19:01:52 crc kubenswrapper[4981]: I0227 19:01:52.210683 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jfmbn" event={"ID":"24537f79-2aa5-4ba1-afc0-e91183569040","Type":"ContainerDied","Data":"aa209d296c9a731090eff0b87cf96c1e883b4098363de1e8d95f4c86cf7cf502"} Feb 27 19:01:52 crc kubenswrapper[4981]: I0227 19:01:52.214376 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerStarted","Data":"295fa1abf26d7f71e7264b907ce20f7606d63942d5385b64cf4bd1f2c3c45c16"} Feb 27 19:01:52 crc kubenswrapper[4981]: I0227 19:01:52.216468 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mp522" event={"ID":"f2c01995-bfbd-4e83-bc8e-e476d7d32a4b","Type":"ContainerStarted","Data":"34a253bd0e4ecbfa69893fa41e68c331d096ba50980211a875ff23a2cf81e904"} Feb 27 19:01:52 crc kubenswrapper[4981]: I0227 19:01:52.216624 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mp522" Feb 27 19:01:53 crc kubenswrapper[4981]: I0227 19:01:53.228363 4981 generic.go:334] "Generic (PLEG): container finished" podID="24537f79-2aa5-4ba1-afc0-e91183569040" containerID="66165cc9cad8025365c4f1759235d050dc4b5b4e6d6af7f2150eb731076f43aa" exitCode=0 Feb 27 19:01:53 crc kubenswrapper[4981]: I0227 19:01:53.230040 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jfmbn" 
event={"ID":"24537f79-2aa5-4ba1-afc0-e91183569040","Type":"ContainerDied","Data":"66165cc9cad8025365c4f1759235d050dc4b5b4e6d6af7f2150eb731076f43aa"} Feb 27 19:01:53 crc kubenswrapper[4981]: I0227 19:01:53.273911 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mp522" podStartSLOduration=3.3270425279999998 podStartE2EDuration="13.273887185s" podCreationTimestamp="2026-02-27 19:01:40 +0000 UTC" firstStartedPulling="2026-02-27 19:01:41.516160571 +0000 UTC m=+1000.994941731" lastFinishedPulling="2026-02-27 19:01:51.463005228 +0000 UTC m=+1010.941786388" observedRunningTime="2026-02-27 19:01:52.288454924 +0000 UTC m=+1011.767236094" watchObservedRunningTime="2026-02-27 19:01:53.273887185 +0000 UTC m=+1012.752668375" Feb 27 19:01:54 crc kubenswrapper[4981]: I0227 19:01:54.242323 4981 generic.go:334] "Generic (PLEG): container finished" podID="24537f79-2aa5-4ba1-afc0-e91183569040" containerID="9af6120c25330f7173da4fe07bfc59a655767e6af9b794940f4721ca7adcda8f" exitCode=0 Feb 27 19:01:54 crc kubenswrapper[4981]: I0227 19:01:54.242436 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jfmbn" event={"ID":"24537f79-2aa5-4ba1-afc0-e91183569040","Type":"ContainerDied","Data":"9af6120c25330f7173da4fe07bfc59a655767e6af9b794940f4721ca7adcda8f"} Feb 27 19:01:56 crc kubenswrapper[4981]: I0227 19:01:56.263738 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jfmbn" event={"ID":"24537f79-2aa5-4ba1-afc0-e91183569040","Type":"ContainerStarted","Data":"06c8f0a51a76ea2b68b006bfc7971b1f2f7175050ebb06f262fc6eb7b84ee225"} Feb 27 19:01:57 crc kubenswrapper[4981]: I0227 19:01:57.278206 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jfmbn" event={"ID":"24537f79-2aa5-4ba1-afc0-e91183569040","Type":"ContainerStarted","Data":"f4e92700afc085f386f38447b5605750a156c453a913dc3e2a5b2bec8f03a4ce"} Feb 27 19:01:57 crc 
kubenswrapper[4981]: I0227 19:01:57.278629 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jfmbn" event={"ID":"24537f79-2aa5-4ba1-afc0-e91183569040","Type":"ContainerStarted","Data":"c497a7025c7e211f50237598299af0859c36b130a792c08a3f20d474d81682cc"} Feb 27 19:01:58 crc kubenswrapper[4981]: I0227 19:01:58.305574 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jfmbn" event={"ID":"24537f79-2aa5-4ba1-afc0-e91183569040","Type":"ContainerStarted","Data":"f898edf2de700cf54159b6b0a443d889261d4c94c3cfcd5dfed3dab309805f4f"} Feb 27 19:01:58 crc kubenswrapper[4981]: I0227 19:01:58.305960 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jfmbn" event={"ID":"24537f79-2aa5-4ba1-afc0-e91183569040","Type":"ContainerStarted","Data":"0bf6eddd0b545c8b4f2a2690e80d4b5c1edfc4cc36fcbef5e13a599de00e32b6"} Feb 27 19:01:58 crc kubenswrapper[4981]: I0227 19:01:58.305973 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jfmbn" event={"ID":"24537f79-2aa5-4ba1-afc0-e91183569040","Type":"ContainerStarted","Data":"5ae354422ad4b48b3dc58d8f2e0a0bf938136cf099ec4ead100d3045293caedf"} Feb 27 19:01:59 crc kubenswrapper[4981]: I0227 19:01:59.313567 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:01:59 crc kubenswrapper[4981]: I0227 19:01:59.349418 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-jfmbn" podStartSLOduration=10.019701033 podStartE2EDuration="19.349397091s" podCreationTimestamp="2026-02-27 19:01:40 +0000 UTC" firstStartedPulling="2026-02-27 19:01:42.216710741 +0000 UTC m=+1001.695491911" lastFinishedPulling="2026-02-27 19:01:51.546406809 +0000 UTC m=+1011.025187969" observedRunningTime="2026-02-27 19:01:59.345621686 +0000 UTC m=+1018.824402876" watchObservedRunningTime="2026-02-27 19:01:59.349397091 +0000 UTC m=+1018.828178291" Feb 27 
19:02:00 crc kubenswrapper[4981]: I0227 19:02:00.142206 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536982-b7sr9"] Feb 27 19:02:00 crc kubenswrapper[4981]: E0227 19:02:00.142749 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0111811-f744-4142-80b3-c25c79d7a040" containerName="registry-server" Feb 27 19:02:00 crc kubenswrapper[4981]: I0227 19:02:00.142839 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0111811-f744-4142-80b3-c25c79d7a040" containerName="registry-server" Feb 27 19:02:00 crc kubenswrapper[4981]: E0227 19:02:00.142921 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0111811-f744-4142-80b3-c25c79d7a040" containerName="extract-content" Feb 27 19:02:00 crc kubenswrapper[4981]: I0227 19:02:00.142991 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0111811-f744-4142-80b3-c25c79d7a040" containerName="extract-content" Feb 27 19:02:00 crc kubenswrapper[4981]: E0227 19:02:00.143092 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0111811-f744-4142-80b3-c25c79d7a040" containerName="extract-utilities" Feb 27 19:02:00 crc kubenswrapper[4981]: I0227 19:02:00.143169 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0111811-f744-4142-80b3-c25c79d7a040" containerName="extract-utilities" Feb 27 19:02:00 crc kubenswrapper[4981]: I0227 19:02:00.143386 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0111811-f744-4142-80b3-c25c79d7a040" containerName="registry-server" Feb 27 19:02:00 crc kubenswrapper[4981]: I0227 19:02:00.143907 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536982-b7sr9" Feb 27 19:02:00 crc kubenswrapper[4981]: I0227 19:02:00.147628 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:02:00 crc kubenswrapper[4981]: I0227 19:02:00.147960 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:02:00 crc kubenswrapper[4981]: I0227 19:02:00.153560 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf" Feb 27 19:02:00 crc kubenswrapper[4981]: I0227 19:02:00.156999 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536982-b7sr9"] Feb 27 19:02:00 crc kubenswrapper[4981]: I0227 19:02:00.198777 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7lc2\" (UniqueName: \"kubernetes.io/projected/c55cef26-7bd2-40d6-94b3-d2103eb1def6-kube-api-access-d7lc2\") pod \"auto-csr-approver-29536982-b7sr9\" (UID: \"c55cef26-7bd2-40d6-94b3-d2103eb1def6\") " pod="openshift-infra/auto-csr-approver-29536982-b7sr9" Feb 27 19:02:00 crc kubenswrapper[4981]: I0227 19:02:00.300429 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7lc2\" (UniqueName: \"kubernetes.io/projected/c55cef26-7bd2-40d6-94b3-d2103eb1def6-kube-api-access-d7lc2\") pod \"auto-csr-approver-29536982-b7sr9\" (UID: \"c55cef26-7bd2-40d6-94b3-d2103eb1def6\") " pod="openshift-infra/auto-csr-approver-29536982-b7sr9" Feb 27 19:02:00 crc kubenswrapper[4981]: I0227 19:02:00.334581 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7lc2\" (UniqueName: \"kubernetes.io/projected/c55cef26-7bd2-40d6-94b3-d2103eb1def6-kube-api-access-d7lc2\") pod \"auto-csr-approver-29536982-b7sr9\" (UID: \"c55cef26-7bd2-40d6-94b3-d2103eb1def6\") " 
pod="openshift-infra/auto-csr-approver-29536982-b7sr9" Feb 27 19:02:00 crc kubenswrapper[4981]: I0227 19:02:00.462343 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536982-b7sr9" Feb 27 19:02:00 crc kubenswrapper[4981]: I0227 19:02:00.720946 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536982-b7sr9"] Feb 27 19:02:00 crc kubenswrapper[4981]: W0227 19:02:00.733207 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc55cef26_7bd2_40d6_94b3_d2103eb1def6.slice/crio-efc8e20417401f3713c67da4497658414259e7b18038ea1575ecde183eaec47e WatchSource:0}: Error finding container efc8e20417401f3713c67da4497658414259e7b18038ea1575ecde183eaec47e: Status 404 returned error can't find the container with id efc8e20417401f3713c67da4497658414259e7b18038ea1575ecde183eaec47e Feb 27 19:02:01 crc kubenswrapper[4981]: I0227 19:02:01.310187 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mp522" Feb 27 19:02:01 crc kubenswrapper[4981]: I0227 19:02:01.329769 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536982-b7sr9" event={"ID":"c55cef26-7bd2-40d6-94b3-d2103eb1def6","Type":"ContainerStarted","Data":"efc8e20417401f3713c67da4497658414259e7b18038ea1575ecde183eaec47e"} Feb 27 19:02:01 crc kubenswrapper[4981]: I0227 19:02:01.420022 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-86ddb6bd46-rwrfs" Feb 27 19:02:01 crc kubenswrapper[4981]: I0227 19:02:01.891544 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:02:01 crc kubenswrapper[4981]: I0227 19:02:01.959406 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:02:02 crc kubenswrapper[4981]: I0227 19:02:02.339531 4981 generic.go:334] "Generic (PLEG): container finished" podID="c55cef26-7bd2-40d6-94b3-d2103eb1def6" containerID="80437da3b8b7cf5b159153433d72b1f3efb261c96c7662ff55d71b5d465af809" exitCode=0 Feb 27 19:02:02 crc kubenswrapper[4981]: I0227 19:02:02.339639 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536982-b7sr9" event={"ID":"c55cef26-7bd2-40d6-94b3-d2103eb1def6","Type":"ContainerDied","Data":"80437da3b8b7cf5b159153433d72b1f3efb261c96c7662ff55d71b5d465af809"} Feb 27 19:02:02 crc kubenswrapper[4981]: I0227 19:02:02.899346 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-kzbf5" Feb 27 19:02:03 crc kubenswrapper[4981]: I0227 19:02:03.944644 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536982-b7sr9" Feb 27 19:02:04 crc kubenswrapper[4981]: I0227 19:02:04.053915 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7lc2\" (UniqueName: \"kubernetes.io/projected/c55cef26-7bd2-40d6-94b3-d2103eb1def6-kube-api-access-d7lc2\") pod \"c55cef26-7bd2-40d6-94b3-d2103eb1def6\" (UID: \"c55cef26-7bd2-40d6-94b3-d2103eb1def6\") " Feb 27 19:02:04 crc kubenswrapper[4981]: I0227 19:02:04.059901 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c55cef26-7bd2-40d6-94b3-d2103eb1def6-kube-api-access-d7lc2" (OuterVolumeSpecName: "kube-api-access-d7lc2") pod "c55cef26-7bd2-40d6-94b3-d2103eb1def6" (UID: "c55cef26-7bd2-40d6-94b3-d2103eb1def6"). InnerVolumeSpecName "kube-api-access-d7lc2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:02:04 crc kubenswrapper[4981]: I0227 19:02:04.155992 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7lc2\" (UniqueName: \"kubernetes.io/projected/c55cef26-7bd2-40d6-94b3-d2103eb1def6-kube-api-access-d7lc2\") on node \"crc\" DevicePath \"\"" Feb 27 19:02:04 crc kubenswrapper[4981]: I0227 19:02:04.355320 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536982-b7sr9" event={"ID":"c55cef26-7bd2-40d6-94b3-d2103eb1def6","Type":"ContainerDied","Data":"efc8e20417401f3713c67da4497658414259e7b18038ea1575ecde183eaec47e"} Feb 27 19:02:04 crc kubenswrapper[4981]: I0227 19:02:04.355360 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efc8e20417401f3713c67da4497658414259e7b18038ea1575ecde183eaec47e" Feb 27 19:02:04 crc kubenswrapper[4981]: I0227 19:02:04.355409 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536982-b7sr9" Feb 27 19:02:05 crc kubenswrapper[4981]: I0227 19:02:05.157854 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536976-w4mmk"] Feb 27 19:02:05 crc kubenswrapper[4981]: I0227 19:02:05.165572 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536976-w4mmk"] Feb 27 19:02:05 crc kubenswrapper[4981]: I0227 19:02:05.637629 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53b9d1d4-db23-486c-9a1f-9ff21fc7b802" path="/var/lib/kubelet/pods/53b9d1d4-db23-486c-9a1f-9ff21fc7b802/volumes" Feb 27 19:02:06 crc kubenswrapper[4981]: I0227 19:02:06.172472 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj"] Feb 27 19:02:06 crc kubenswrapper[4981]: E0227 19:02:06.172691 4981 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="c55cef26-7bd2-40d6-94b3-d2103eb1def6" containerName="oc" Feb 27 19:02:06 crc kubenswrapper[4981]: I0227 19:02:06.172703 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c55cef26-7bd2-40d6-94b3-d2103eb1def6" containerName="oc" Feb 27 19:02:06 crc kubenswrapper[4981]: I0227 19:02:06.172808 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c55cef26-7bd2-40d6-94b3-d2103eb1def6" containerName="oc" Feb 27 19:02:06 crc kubenswrapper[4981]: I0227 19:02:06.173518 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj" Feb 27 19:02:06 crc kubenswrapper[4981]: I0227 19:02:06.175213 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 27 19:02:06 crc kubenswrapper[4981]: I0227 19:02:06.187969 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj"] Feb 27 19:02:06 crc kubenswrapper[4981]: I0227 19:02:06.280498 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8mq2\" (UniqueName: \"kubernetes.io/projected/dcffc4a2-219d-4e33-afe1-c8eab8b67ae4-kube-api-access-z8mq2\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj\" (UID: \"dcffc4a2-219d-4e33-afe1-c8eab8b67ae4\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj" Feb 27 19:02:06 crc kubenswrapper[4981]: I0227 19:02:06.280573 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcffc4a2-219d-4e33-afe1-c8eab8b67ae4-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj\" (UID: \"dcffc4a2-219d-4e33-afe1-c8eab8b67ae4\") " 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj" Feb 27 19:02:06 crc kubenswrapper[4981]: I0227 19:02:06.280599 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcffc4a2-219d-4e33-afe1-c8eab8b67ae4-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj\" (UID: \"dcffc4a2-219d-4e33-afe1-c8eab8b67ae4\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj" Feb 27 19:02:06 crc kubenswrapper[4981]: I0227 19:02:06.381714 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcffc4a2-219d-4e33-afe1-c8eab8b67ae4-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj\" (UID: \"dcffc4a2-219d-4e33-afe1-c8eab8b67ae4\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj" Feb 27 19:02:06 crc kubenswrapper[4981]: I0227 19:02:06.381793 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcffc4a2-219d-4e33-afe1-c8eab8b67ae4-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj\" (UID: \"dcffc4a2-219d-4e33-afe1-c8eab8b67ae4\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj" Feb 27 19:02:06 crc kubenswrapper[4981]: I0227 19:02:06.381905 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8mq2\" (UniqueName: \"kubernetes.io/projected/dcffc4a2-219d-4e33-afe1-c8eab8b67ae4-kube-api-access-z8mq2\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj\" (UID: \"dcffc4a2-219d-4e33-afe1-c8eab8b67ae4\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj" Feb 27 19:02:06 crc kubenswrapper[4981]: I0227 
19:02:06.382383 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcffc4a2-219d-4e33-afe1-c8eab8b67ae4-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj\" (UID: \"dcffc4a2-219d-4e33-afe1-c8eab8b67ae4\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj" Feb 27 19:02:06 crc kubenswrapper[4981]: I0227 19:02:06.382653 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcffc4a2-219d-4e33-afe1-c8eab8b67ae4-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj\" (UID: \"dcffc4a2-219d-4e33-afe1-c8eab8b67ae4\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj" Feb 27 19:02:06 crc kubenswrapper[4981]: I0227 19:02:06.406963 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8mq2\" (UniqueName: \"kubernetes.io/projected/dcffc4a2-219d-4e33-afe1-c8eab8b67ae4-kube-api-access-z8mq2\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj\" (UID: \"dcffc4a2-219d-4e33-afe1-c8eab8b67ae4\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj" Feb 27 19:02:06 crc kubenswrapper[4981]: I0227 19:02:06.488290 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj" Feb 27 19:02:06 crc kubenswrapper[4981]: I0227 19:02:06.769611 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj"] Feb 27 19:02:07 crc kubenswrapper[4981]: I0227 19:02:07.381787 4981 generic.go:334] "Generic (PLEG): container finished" podID="dcffc4a2-219d-4e33-afe1-c8eab8b67ae4" containerID="32440b9305a18891274d906fbd6999eef6c98310afbed0ebd34fd6984f5bd3cd" exitCode=0 Feb 27 19:02:07 crc kubenswrapper[4981]: I0227 19:02:07.381826 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj" event={"ID":"dcffc4a2-219d-4e33-afe1-c8eab8b67ae4","Type":"ContainerDied","Data":"32440b9305a18891274d906fbd6999eef6c98310afbed0ebd34fd6984f5bd3cd"} Feb 27 19:02:07 crc kubenswrapper[4981]: I0227 19:02:07.381849 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj" event={"ID":"dcffc4a2-219d-4e33-afe1-c8eab8b67ae4","Type":"ContainerStarted","Data":"083ad949214491f123d335ff3f6638baf0a53767defd940161f88f69a27ca8df"} Feb 27 19:02:11 crc kubenswrapper[4981]: I0227 19:02:11.893452 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-jfmbn" Feb 27 19:02:13 crc kubenswrapper[4981]: I0227 19:02:13.829199 4981 generic.go:334] "Generic (PLEG): container finished" podID="dcffc4a2-219d-4e33-afe1-c8eab8b67ae4" containerID="e9569d28b7c5a74569a2de33b9dfd8230bb37bebf8534a8c6a7f8b792a6739ec" exitCode=0 Feb 27 19:02:13 crc kubenswrapper[4981]: I0227 19:02:13.829260 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj" 
event={"ID":"dcffc4a2-219d-4e33-afe1-c8eab8b67ae4","Type":"ContainerDied","Data":"e9569d28b7c5a74569a2de33b9dfd8230bb37bebf8534a8c6a7f8b792a6739ec"} Feb 27 19:02:14 crc kubenswrapper[4981]: I0227 19:02:14.840690 4981 generic.go:334] "Generic (PLEG): container finished" podID="dcffc4a2-219d-4e33-afe1-c8eab8b67ae4" containerID="371588c2df3ac2726e9ea49c093e460a3307bd7c20e2a47bd55bdece10db91c7" exitCode=0 Feb 27 19:02:14 crc kubenswrapper[4981]: I0227 19:02:14.840748 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj" event={"ID":"dcffc4a2-219d-4e33-afe1-c8eab8b67ae4","Type":"ContainerDied","Data":"371588c2df3ac2726e9ea49c093e460a3307bd7c20e2a47bd55bdece10db91c7"} Feb 27 19:02:18 crc kubenswrapper[4981]: I0227 19:02:18.255908 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj" Feb 27 19:02:18 crc kubenswrapper[4981]: I0227 19:02:18.444970 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8mq2\" (UniqueName: \"kubernetes.io/projected/dcffc4a2-219d-4e33-afe1-c8eab8b67ae4-kube-api-access-z8mq2\") pod \"dcffc4a2-219d-4e33-afe1-c8eab8b67ae4\" (UID: \"dcffc4a2-219d-4e33-afe1-c8eab8b67ae4\") " Feb 27 19:02:18 crc kubenswrapper[4981]: I0227 19:02:18.445083 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcffc4a2-219d-4e33-afe1-c8eab8b67ae4-bundle\") pod \"dcffc4a2-219d-4e33-afe1-c8eab8b67ae4\" (UID: \"dcffc4a2-219d-4e33-afe1-c8eab8b67ae4\") " Feb 27 19:02:18 crc kubenswrapper[4981]: I0227 19:02:18.445133 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcffc4a2-219d-4e33-afe1-c8eab8b67ae4-util\") pod \"dcffc4a2-219d-4e33-afe1-c8eab8b67ae4\" (UID: 
\"dcffc4a2-219d-4e33-afe1-c8eab8b67ae4\") " Feb 27 19:02:18 crc kubenswrapper[4981]: I0227 19:02:18.446997 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcffc4a2-219d-4e33-afe1-c8eab8b67ae4-bundle" (OuterVolumeSpecName: "bundle") pod "dcffc4a2-219d-4e33-afe1-c8eab8b67ae4" (UID: "dcffc4a2-219d-4e33-afe1-c8eab8b67ae4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:02:18 crc kubenswrapper[4981]: I0227 19:02:18.470569 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcffc4a2-219d-4e33-afe1-c8eab8b67ae4-util" (OuterVolumeSpecName: "util") pod "dcffc4a2-219d-4e33-afe1-c8eab8b67ae4" (UID: "dcffc4a2-219d-4e33-afe1-c8eab8b67ae4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:02:18 crc kubenswrapper[4981]: I0227 19:02:18.546582 4981 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcffc4a2-219d-4e33-afe1-c8eab8b67ae4-util\") on node \"crc\" DevicePath \"\"" Feb 27 19:02:18 crc kubenswrapper[4981]: I0227 19:02:18.546613 4981 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcffc4a2-219d-4e33-afe1-c8eab8b67ae4-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:02:18 crc kubenswrapper[4981]: I0227 19:02:18.570110 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcffc4a2-219d-4e33-afe1-c8eab8b67ae4-kube-api-access-z8mq2" (OuterVolumeSpecName: "kube-api-access-z8mq2") pod "dcffc4a2-219d-4e33-afe1-c8eab8b67ae4" (UID: "dcffc4a2-219d-4e33-afe1-c8eab8b67ae4"). InnerVolumeSpecName "kube-api-access-z8mq2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:02:18 crc kubenswrapper[4981]: I0227 19:02:18.649944 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8mq2\" (UniqueName: \"kubernetes.io/projected/dcffc4a2-219d-4e33-afe1-c8eab8b67ae4-kube-api-access-z8mq2\") on node \"crc\" DevicePath \"\"" Feb 27 19:02:19 crc kubenswrapper[4981]: I0227 19:02:19.344193 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj" event={"ID":"dcffc4a2-219d-4e33-afe1-c8eab8b67ae4","Type":"ContainerDied","Data":"083ad949214491f123d335ff3f6638baf0a53767defd940161f88f69a27ca8df"} Feb 27 19:02:19 crc kubenswrapper[4981]: I0227 19:02:19.344243 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="083ad949214491f123d335ff3f6638baf0a53767defd940161f88f69a27ca8df" Feb 27 19:02:19 crc kubenswrapper[4981]: I0227 19:02:19.344341 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj" Feb 27 19:02:24 crc kubenswrapper[4981]: I0227 19:02:24.813831 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rhhlc"] Feb 27 19:02:24 crc kubenswrapper[4981]: E0227 19:02:24.814714 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcffc4a2-219d-4e33-afe1-c8eab8b67ae4" containerName="extract" Feb 27 19:02:24 crc kubenswrapper[4981]: I0227 19:02:24.814732 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcffc4a2-219d-4e33-afe1-c8eab8b67ae4" containerName="extract" Feb 27 19:02:24 crc kubenswrapper[4981]: E0227 19:02:24.814745 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcffc4a2-219d-4e33-afe1-c8eab8b67ae4" containerName="pull" Feb 27 19:02:24 crc kubenswrapper[4981]: I0227 19:02:24.814754 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcffc4a2-219d-4e33-afe1-c8eab8b67ae4" containerName="pull" Feb 27 19:02:24 crc kubenswrapper[4981]: E0227 19:02:24.814765 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcffc4a2-219d-4e33-afe1-c8eab8b67ae4" containerName="util" Feb 27 19:02:24 crc kubenswrapper[4981]: I0227 19:02:24.814772 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcffc4a2-219d-4e33-afe1-c8eab8b67ae4" containerName="util" Feb 27 19:02:24 crc kubenswrapper[4981]: I0227 19:02:24.814922 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcffc4a2-219d-4e33-afe1-c8eab8b67ae4" containerName="extract" Feb 27 19:02:24 crc kubenswrapper[4981]: I0227 19:02:24.815494 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rhhlc" Feb 27 19:02:24 crc kubenswrapper[4981]: I0227 19:02:24.819102 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Feb 27 19:02:24 crc kubenswrapper[4981]: I0227 19:02:24.819330 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Feb 27 19:02:24 crc kubenswrapper[4981]: I0227 19:02:24.819337 4981 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-6mn56" Feb 27 19:02:24 crc kubenswrapper[4981]: I0227 19:02:24.845824 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rhhlc"] Feb 27 19:02:24 crc kubenswrapper[4981]: I0227 19:02:24.998644 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc7d5\" (UniqueName: \"kubernetes.io/projected/78b43951-08f6-445d-8f63-8abdc16e082b-kube-api-access-fc7d5\") pod \"cert-manager-operator-controller-manager-66c8bdd694-rhhlc\" (UID: \"78b43951-08f6-445d-8f63-8abdc16e082b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rhhlc" Feb 27 19:02:24 crc kubenswrapper[4981]: I0227 19:02:24.998741 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78b43951-08f6-445d-8f63-8abdc16e082b-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-rhhlc\" (UID: \"78b43951-08f6-445d-8f63-8abdc16e082b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rhhlc" Feb 27 19:02:25 crc kubenswrapper[4981]: I0227 19:02:25.099929 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fc7d5\" (UniqueName: \"kubernetes.io/projected/78b43951-08f6-445d-8f63-8abdc16e082b-kube-api-access-fc7d5\") pod \"cert-manager-operator-controller-manager-66c8bdd694-rhhlc\" (UID: \"78b43951-08f6-445d-8f63-8abdc16e082b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rhhlc" Feb 27 19:02:25 crc kubenswrapper[4981]: I0227 19:02:25.100104 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78b43951-08f6-445d-8f63-8abdc16e082b-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-rhhlc\" (UID: \"78b43951-08f6-445d-8f63-8abdc16e082b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rhhlc" Feb 27 19:02:25 crc kubenswrapper[4981]: I0227 19:02:25.100892 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78b43951-08f6-445d-8f63-8abdc16e082b-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-rhhlc\" (UID: \"78b43951-08f6-445d-8f63-8abdc16e082b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rhhlc" Feb 27 19:02:25 crc kubenswrapper[4981]: I0227 19:02:25.122332 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc7d5\" (UniqueName: \"kubernetes.io/projected/78b43951-08f6-445d-8f63-8abdc16e082b-kube-api-access-fc7d5\") pod \"cert-manager-operator-controller-manager-66c8bdd694-rhhlc\" (UID: \"78b43951-08f6-445d-8f63-8abdc16e082b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rhhlc" Feb 27 19:02:25 crc kubenswrapper[4981]: I0227 19:02:25.200803 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rhhlc" Feb 27 19:02:25 crc kubenswrapper[4981]: I0227 19:02:25.542315 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rhhlc"] Feb 27 19:02:25 crc kubenswrapper[4981]: W0227 19:02:25.548555 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78b43951_08f6_445d_8f63_8abdc16e082b.slice/crio-daa08c4c16a3aeaa0a5a0c4c95f8f71a0aa34cf911e5ca0a9e3850a166e17ef8 WatchSource:0}: Error finding container daa08c4c16a3aeaa0a5a0c4c95f8f71a0aa34cf911e5ca0a9e3850a166e17ef8: Status 404 returned error can't find the container with id daa08c4c16a3aeaa0a5a0c4c95f8f71a0aa34cf911e5ca0a9e3850a166e17ef8 Feb 27 19:02:26 crc kubenswrapper[4981]: I0227 19:02:26.388745 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rhhlc" event={"ID":"78b43951-08f6-445d-8f63-8abdc16e082b","Type":"ContainerStarted","Data":"daa08c4c16a3aeaa0a5a0c4c95f8f71a0aa34cf911e5ca0a9e3850a166e17ef8"} Feb 27 19:02:31 crc kubenswrapper[4981]: I0227 19:02:31.431138 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rhhlc" event={"ID":"78b43951-08f6-445d-8f63-8abdc16e082b","Type":"ContainerStarted","Data":"06adfb6658c657c32086a6458c4a3a30d303eb18f3d1110d37b97d9cd2b59e6b"} Feb 27 19:02:31 crc kubenswrapper[4981]: I0227 19:02:31.455086 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-rhhlc" podStartSLOduration=2.314342926 podStartE2EDuration="7.455041952s" podCreationTimestamp="2026-02-27 19:02:24 +0000 UTC" firstStartedPulling="2026-02-27 19:02:25.550850678 +0000 UTC m=+1045.029631838" 
lastFinishedPulling="2026-02-27 19:02:30.691549684 +0000 UTC m=+1050.170330864" observedRunningTime="2026-02-27 19:02:31.4487509 +0000 UTC m=+1050.927532100" watchObservedRunningTime="2026-02-27 19:02:31.455041952 +0000 UTC m=+1050.933823132" Feb 27 19:02:34 crc kubenswrapper[4981]: I0227 19:02:34.299543 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-k4fph"] Feb 27 19:02:34 crc kubenswrapper[4981]: I0227 19:02:34.301034 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-k4fph" Feb 27 19:02:34 crc kubenswrapper[4981]: I0227 19:02:34.303193 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 27 19:02:34 crc kubenswrapper[4981]: I0227 19:02:34.304167 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 27 19:02:34 crc kubenswrapper[4981]: I0227 19:02:34.309818 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-k4fph"] Feb 27 19:02:34 crc kubenswrapper[4981]: I0227 19:02:34.412600 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9drx\" (UniqueName: \"kubernetes.io/projected/8d477e9b-09c2-464a-8dfa-c42dc9a8a6e4-kube-api-access-n9drx\") pod \"cert-manager-webhook-6888856db4-k4fph\" (UID: \"8d477e9b-09c2-464a-8dfa-c42dc9a8a6e4\") " pod="cert-manager/cert-manager-webhook-6888856db4-k4fph" Feb 27 19:02:34 crc kubenswrapper[4981]: I0227 19:02:34.412861 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d477e9b-09c2-464a-8dfa-c42dc9a8a6e4-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-k4fph\" (UID: \"8d477e9b-09c2-464a-8dfa-c42dc9a8a6e4\") " 
pod="cert-manager/cert-manager-webhook-6888856db4-k4fph" Feb 27 19:02:34 crc kubenswrapper[4981]: I0227 19:02:34.514082 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9drx\" (UniqueName: \"kubernetes.io/projected/8d477e9b-09c2-464a-8dfa-c42dc9a8a6e4-kube-api-access-n9drx\") pod \"cert-manager-webhook-6888856db4-k4fph\" (UID: \"8d477e9b-09c2-464a-8dfa-c42dc9a8a6e4\") " pod="cert-manager/cert-manager-webhook-6888856db4-k4fph" Feb 27 19:02:34 crc kubenswrapper[4981]: I0227 19:02:34.514291 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d477e9b-09c2-464a-8dfa-c42dc9a8a6e4-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-k4fph\" (UID: \"8d477e9b-09c2-464a-8dfa-c42dc9a8a6e4\") " pod="cert-manager/cert-manager-webhook-6888856db4-k4fph" Feb 27 19:02:34 crc kubenswrapper[4981]: I0227 19:02:34.533794 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9drx\" (UniqueName: \"kubernetes.io/projected/8d477e9b-09c2-464a-8dfa-c42dc9a8a6e4-kube-api-access-n9drx\") pod \"cert-manager-webhook-6888856db4-k4fph\" (UID: \"8d477e9b-09c2-464a-8dfa-c42dc9a8a6e4\") " pod="cert-manager/cert-manager-webhook-6888856db4-k4fph" Feb 27 19:02:34 crc kubenswrapper[4981]: I0227 19:02:34.535108 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d477e9b-09c2-464a-8dfa-c42dc9a8a6e4-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-k4fph\" (UID: \"8d477e9b-09c2-464a-8dfa-c42dc9a8a6e4\") " pod="cert-manager/cert-manager-webhook-6888856db4-k4fph" Feb 27 19:02:34 crc kubenswrapper[4981]: I0227 19:02:34.663664 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-k4fph" Feb 27 19:02:35 crc kubenswrapper[4981]: I0227 19:02:35.057408 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-k4fph"] Feb 27 19:02:35 crc kubenswrapper[4981]: I0227 19:02:35.459354 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-k4fph" event={"ID":"8d477e9b-09c2-464a-8dfa-c42dc9a8a6e4","Type":"ContainerStarted","Data":"4eef30a3af965f043f8eb0cd15d3f46eecb228d535621ce6bd4a1e167b2095f8"} Feb 27 19:02:37 crc kubenswrapper[4981]: I0227 19:02:37.025989 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-vkhnk"] Feb 27 19:02:37 crc kubenswrapper[4981]: I0227 19:02:37.027235 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-vkhnk" Feb 27 19:02:37 crc kubenswrapper[4981]: I0227 19:02:37.030050 4981 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-9ss9d" Feb 27 19:02:37 crc kubenswrapper[4981]: I0227 19:02:37.035128 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-vkhnk"] Feb 27 19:02:37 crc kubenswrapper[4981]: I0227 19:02:37.150497 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxbt4\" (UniqueName: \"kubernetes.io/projected/757ddca8-db4b-483a-8f0f-f649431f54da-kube-api-access-bxbt4\") pod \"cert-manager-cainjector-5545bd876-vkhnk\" (UID: \"757ddca8-db4b-483a-8f0f-f649431f54da\") " pod="cert-manager/cert-manager-cainjector-5545bd876-vkhnk" Feb 27 19:02:37 crc kubenswrapper[4981]: I0227 19:02:37.150579 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/757ddca8-db4b-483a-8f0f-f649431f54da-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-vkhnk\" (UID: \"757ddca8-db4b-483a-8f0f-f649431f54da\") " pod="cert-manager/cert-manager-cainjector-5545bd876-vkhnk" Feb 27 19:02:37 crc kubenswrapper[4981]: I0227 19:02:37.252078 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/757ddca8-db4b-483a-8f0f-f649431f54da-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-vkhnk\" (UID: \"757ddca8-db4b-483a-8f0f-f649431f54da\") " pod="cert-manager/cert-manager-cainjector-5545bd876-vkhnk" Feb 27 19:02:37 crc kubenswrapper[4981]: I0227 19:02:37.252239 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxbt4\" (UniqueName: \"kubernetes.io/projected/757ddca8-db4b-483a-8f0f-f649431f54da-kube-api-access-bxbt4\") pod \"cert-manager-cainjector-5545bd876-vkhnk\" (UID: \"757ddca8-db4b-483a-8f0f-f649431f54da\") " pod="cert-manager/cert-manager-cainjector-5545bd876-vkhnk" Feb 27 19:02:37 crc kubenswrapper[4981]: I0227 19:02:37.278150 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/757ddca8-db4b-483a-8f0f-f649431f54da-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-vkhnk\" (UID: \"757ddca8-db4b-483a-8f0f-f649431f54da\") " pod="cert-manager/cert-manager-cainjector-5545bd876-vkhnk" Feb 27 19:02:37 crc kubenswrapper[4981]: I0227 19:02:37.283432 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxbt4\" (UniqueName: \"kubernetes.io/projected/757ddca8-db4b-483a-8f0f-f649431f54da-kube-api-access-bxbt4\") pod \"cert-manager-cainjector-5545bd876-vkhnk\" (UID: \"757ddca8-db4b-483a-8f0f-f649431f54da\") " pod="cert-manager/cert-manager-cainjector-5545bd876-vkhnk" Feb 27 19:02:37 crc kubenswrapper[4981]: I0227 19:02:37.367693 4981 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-vkhnk" Feb 27 19:02:37 crc kubenswrapper[4981]: I0227 19:02:37.833320 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-vkhnk"] Feb 27 19:02:37 crc kubenswrapper[4981]: W0227 19:02:37.845359 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod757ddca8_db4b_483a_8f0f_f649431f54da.slice/crio-96a81ee0f20c2158c58967d7e1f22a808e420c6cbb1a60ec9409f661602dac7d WatchSource:0}: Error finding container 96a81ee0f20c2158c58967d7e1f22a808e420c6cbb1a60ec9409f661602dac7d: Status 404 returned error can't find the container with id 96a81ee0f20c2158c58967d7e1f22a808e420c6cbb1a60ec9409f661602dac7d Feb 27 19:02:38 crc kubenswrapper[4981]: I0227 19:02:38.480219 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-vkhnk" event={"ID":"757ddca8-db4b-483a-8f0f-f649431f54da","Type":"ContainerStarted","Data":"96a81ee0f20c2158c58967d7e1f22a808e420c6cbb1a60ec9409f661602dac7d"} Feb 27 19:02:40 crc kubenswrapper[4981]: I0227 19:02:40.494328 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-k4fph" event={"ID":"8d477e9b-09c2-464a-8dfa-c42dc9a8a6e4","Type":"ContainerStarted","Data":"8952f54345755862d08d3dd06c3fdb0dd399173f1fe06dcce3279eff33632357"} Feb 27 19:02:40 crc kubenswrapper[4981]: I0227 19:02:40.496763 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-vkhnk" event={"ID":"757ddca8-db4b-483a-8f0f-f649431f54da","Type":"ContainerStarted","Data":"b1aeddade6965f4ff1d4dd06b0ae2cb4bcb8ec858ed7d5395817bf8c9fc6c412"} Feb 27 19:02:40 crc kubenswrapper[4981]: I0227 19:02:40.518600 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="cert-manager/cert-manager-webhook-6888856db4-k4fph" podStartSLOduration=1.462460481 podStartE2EDuration="6.51858658s" podCreationTimestamp="2026-02-27 19:02:34 +0000 UTC" firstStartedPulling="2026-02-27 19:02:35.067742071 +0000 UTC m=+1054.546523231" lastFinishedPulling="2026-02-27 19:02:40.12386817 +0000 UTC m=+1059.602649330" observedRunningTime="2026-02-27 19:02:40.513818005 +0000 UTC m=+1059.992599165" watchObservedRunningTime="2026-02-27 19:02:40.51858658 +0000 UTC m=+1059.997367740" Feb 27 19:02:40 crc kubenswrapper[4981]: I0227 19:02:40.532651 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-vkhnk" podStartSLOduration=1.254962454 podStartE2EDuration="3.532635188s" podCreationTimestamp="2026-02-27 19:02:37 +0000 UTC" firstStartedPulling="2026-02-27 19:02:37.847471586 +0000 UTC m=+1057.326252746" lastFinishedPulling="2026-02-27 19:02:40.12514432 +0000 UTC m=+1059.603925480" observedRunningTime="2026-02-27 19:02:40.528992327 +0000 UTC m=+1060.007773487" watchObservedRunningTime="2026-02-27 19:02:40.532635188 +0000 UTC m=+1060.011416348" Feb 27 19:02:41 crc kubenswrapper[4981]: I0227 19:02:41.505293 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-k4fph" Feb 27 19:02:49 crc kubenswrapper[4981]: I0227 19:02:49.698376 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-k4fph" Feb 27 19:02:54 crc kubenswrapper[4981]: I0227 19:02:54.082514 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-wq6zs"] Feb 27 19:02:54 crc kubenswrapper[4981]: I0227 19:02:54.084421 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-wq6zs" Feb 27 19:02:54 crc kubenswrapper[4981]: I0227 19:02:54.087353 4981 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-hxnh9" Feb 27 19:02:54 crc kubenswrapper[4981]: I0227 19:02:54.105238 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-wq6zs"] Feb 27 19:02:54 crc kubenswrapper[4981]: I0227 19:02:54.156369 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxtkj\" (UniqueName: \"kubernetes.io/projected/344136f5-bd6a-4fb8-8f50-b049e04956ab-kube-api-access-xxtkj\") pod \"cert-manager-545d4d4674-wq6zs\" (UID: \"344136f5-bd6a-4fb8-8f50-b049e04956ab\") " pod="cert-manager/cert-manager-545d4d4674-wq6zs" Feb 27 19:02:54 crc kubenswrapper[4981]: I0227 19:02:54.156459 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/344136f5-bd6a-4fb8-8f50-b049e04956ab-bound-sa-token\") pod \"cert-manager-545d4d4674-wq6zs\" (UID: \"344136f5-bd6a-4fb8-8f50-b049e04956ab\") " pod="cert-manager/cert-manager-545d4d4674-wq6zs" Feb 27 19:02:54 crc kubenswrapper[4981]: I0227 19:02:54.257888 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxtkj\" (UniqueName: \"kubernetes.io/projected/344136f5-bd6a-4fb8-8f50-b049e04956ab-kube-api-access-xxtkj\") pod \"cert-manager-545d4d4674-wq6zs\" (UID: \"344136f5-bd6a-4fb8-8f50-b049e04956ab\") " pod="cert-manager/cert-manager-545d4d4674-wq6zs" Feb 27 19:02:54 crc kubenswrapper[4981]: I0227 19:02:54.257948 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/344136f5-bd6a-4fb8-8f50-b049e04956ab-bound-sa-token\") pod \"cert-manager-545d4d4674-wq6zs\" (UID: 
\"344136f5-bd6a-4fb8-8f50-b049e04956ab\") " pod="cert-manager/cert-manager-545d4d4674-wq6zs" Feb 27 19:02:54 crc kubenswrapper[4981]: I0227 19:02:54.291539 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/344136f5-bd6a-4fb8-8f50-b049e04956ab-bound-sa-token\") pod \"cert-manager-545d4d4674-wq6zs\" (UID: \"344136f5-bd6a-4fb8-8f50-b049e04956ab\") " pod="cert-manager/cert-manager-545d4d4674-wq6zs" Feb 27 19:02:54 crc kubenswrapper[4981]: I0227 19:02:54.294343 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxtkj\" (UniqueName: \"kubernetes.io/projected/344136f5-bd6a-4fb8-8f50-b049e04956ab-kube-api-access-xxtkj\") pod \"cert-manager-545d4d4674-wq6zs\" (UID: \"344136f5-bd6a-4fb8-8f50-b049e04956ab\") " pod="cert-manager/cert-manager-545d4d4674-wq6zs" Feb 27 19:02:54 crc kubenswrapper[4981]: I0227 19:02:54.421841 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-wq6zs" Feb 27 19:02:54 crc kubenswrapper[4981]: I0227 19:02:54.709790 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-wq6zs"] Feb 27 19:02:54 crc kubenswrapper[4981]: W0227 19:02:54.716375 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod344136f5_bd6a_4fb8_8f50_b049e04956ab.slice/crio-7eb2bb849f8c67f2901f48dac0550151194978759eee7d37474abfbc8be60ca6 WatchSource:0}: Error finding container 7eb2bb849f8c67f2901f48dac0550151194978759eee7d37474abfbc8be60ca6: Status 404 returned error can't find the container with id 7eb2bb849f8c67f2901f48dac0550151194978759eee7d37474abfbc8be60ca6 Feb 27 19:02:55 crc kubenswrapper[4981]: I0227 19:02:55.601112 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-wq6zs" 
event={"ID":"344136f5-bd6a-4fb8-8f50-b049e04956ab","Type":"ContainerStarted","Data":"2157bd1bbc6271dec510acbc0407cbee40523c5e39c606f7c8edfef9baf34fc8"} Feb 27 19:02:55 crc kubenswrapper[4981]: I0227 19:02:55.601433 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-wq6zs" event={"ID":"344136f5-bd6a-4fb8-8f50-b049e04956ab","Type":"ContainerStarted","Data":"7eb2bb849f8c67f2901f48dac0550151194978759eee7d37474abfbc8be60ca6"} Feb 27 19:02:55 crc kubenswrapper[4981]: I0227 19:02:55.627831 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-wq6zs" podStartSLOduration=1.627805049 podStartE2EDuration="1.627805049s" podCreationTimestamp="2026-02-27 19:02:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:02:55.623291972 +0000 UTC m=+1075.102073172" watchObservedRunningTime="2026-02-27 19:02:55.627805049 +0000 UTC m=+1075.106586239" Feb 27 19:03:02 crc kubenswrapper[4981]: I0227 19:03:02.856874 4981 scope.go:117] "RemoveContainer" containerID="f2103bd39e5ce4b4891daf6da2f76cc1df1178b6b341a96b23cde6cf19513719" Feb 27 19:03:04 crc kubenswrapper[4981]: I0227 19:03:04.118680 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-8zq2k"] Feb 27 19:03:04 crc kubenswrapper[4981]: I0227 19:03:04.119511 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-8zq2k" Feb 27 19:03:04 crc kubenswrapper[4981]: I0227 19:03:04.121814 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-tnnsm" Feb 27 19:03:04 crc kubenswrapper[4981]: I0227 19:03:04.122797 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 27 19:03:04 crc kubenswrapper[4981]: I0227 19:03:04.124275 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 27 19:03:04 crc kubenswrapper[4981]: I0227 19:03:04.136625 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8zq2k"] Feb 27 19:03:04 crc kubenswrapper[4981]: I0227 19:03:04.313151 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm25h\" (UniqueName: \"kubernetes.io/projected/c3f0e0cc-0738-470c-bb65-160a6b4e3e05-kube-api-access-bm25h\") pod \"openstack-operator-index-8zq2k\" (UID: \"c3f0e0cc-0738-470c-bb65-160a6b4e3e05\") " pod="openstack-operators/openstack-operator-index-8zq2k" Feb 27 19:03:04 crc kubenswrapper[4981]: I0227 19:03:04.414702 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm25h\" (UniqueName: \"kubernetes.io/projected/c3f0e0cc-0738-470c-bb65-160a6b4e3e05-kube-api-access-bm25h\") pod \"openstack-operator-index-8zq2k\" (UID: \"c3f0e0cc-0738-470c-bb65-160a6b4e3e05\") " pod="openstack-operators/openstack-operator-index-8zq2k" Feb 27 19:03:04 crc kubenswrapper[4981]: I0227 19:03:04.436323 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm25h\" (UniqueName: \"kubernetes.io/projected/c3f0e0cc-0738-470c-bb65-160a6b4e3e05-kube-api-access-bm25h\") pod \"openstack-operator-index-8zq2k\" (UID: 
\"c3f0e0cc-0738-470c-bb65-160a6b4e3e05\") " pod="openstack-operators/openstack-operator-index-8zq2k" Feb 27 19:03:04 crc kubenswrapper[4981]: I0227 19:03:04.436760 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8zq2k" Feb 27 19:03:04 crc kubenswrapper[4981]: I0227 19:03:04.656128 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8zq2k"] Feb 27 19:03:04 crc kubenswrapper[4981]: I0227 19:03:04.728873 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8zq2k" event={"ID":"c3f0e0cc-0738-470c-bb65-160a6b4e3e05","Type":"ContainerStarted","Data":"3b1676e185bcfbb12216cecd21e7160bb381f9525e2a5b07bb2fbb25971dd633"} Feb 27 19:03:07 crc kubenswrapper[4981]: I0227 19:03:07.688472 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-8zq2k"] Feb 27 19:03:08 crc kubenswrapper[4981]: I0227 19:03:08.671334 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-sbzgx"] Feb 27 19:03:08 crc kubenswrapper[4981]: I0227 19:03:08.672591 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-sbzgx" Feb 27 19:03:08 crc kubenswrapper[4981]: I0227 19:03:08.682728 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sbzgx"] Feb 27 19:03:08 crc kubenswrapper[4981]: I0227 19:03:08.731377 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nnwd\" (UniqueName: \"kubernetes.io/projected/fd6567e9-7326-42da-8631-11a5b074f573-kube-api-access-7nnwd\") pod \"openstack-operator-index-sbzgx\" (UID: \"fd6567e9-7326-42da-8631-11a5b074f573\") " pod="openstack-operators/openstack-operator-index-sbzgx" Feb 27 19:03:08 crc kubenswrapper[4981]: I0227 19:03:08.833430 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nnwd\" (UniqueName: \"kubernetes.io/projected/fd6567e9-7326-42da-8631-11a5b074f573-kube-api-access-7nnwd\") pod \"openstack-operator-index-sbzgx\" (UID: \"fd6567e9-7326-42da-8631-11a5b074f573\") " pod="openstack-operators/openstack-operator-index-sbzgx" Feb 27 19:03:08 crc kubenswrapper[4981]: I0227 19:03:08.858285 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nnwd\" (UniqueName: \"kubernetes.io/projected/fd6567e9-7326-42da-8631-11a5b074f573-kube-api-access-7nnwd\") pod \"openstack-operator-index-sbzgx\" (UID: \"fd6567e9-7326-42da-8631-11a5b074f573\") " pod="openstack-operators/openstack-operator-index-sbzgx" Feb 27 19:03:08 crc kubenswrapper[4981]: I0227 19:03:08.993612 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-sbzgx" Feb 27 19:03:09 crc kubenswrapper[4981]: I0227 19:03:09.443130 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sbzgx"] Feb 27 19:03:09 crc kubenswrapper[4981]: I0227 19:03:09.620586 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8zq2k" event={"ID":"c3f0e0cc-0738-470c-bb65-160a6b4e3e05","Type":"ContainerStarted","Data":"159991fb28606088871d77ce4cb4bb4d10623019ced442a4ed2dbe54da179b5f"} Feb 27 19:03:09 crc kubenswrapper[4981]: I0227 19:03:09.620718 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-8zq2k" podUID="c3f0e0cc-0738-470c-bb65-160a6b4e3e05" containerName="registry-server" containerID="cri-o://159991fb28606088871d77ce4cb4bb4d10623019ced442a4ed2dbe54da179b5f" gracePeriod=2 Feb 27 19:03:09 crc kubenswrapper[4981]: I0227 19:03:09.623249 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sbzgx" event={"ID":"fd6567e9-7326-42da-8631-11a5b074f573","Type":"ContainerStarted","Data":"60300beaeb0fc83ee543bb11eed1ffd01fce7bd872b50a8205a351bbc67c215a"} Feb 27 19:03:09 crc kubenswrapper[4981]: I0227 19:03:09.663592 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-8zq2k" podStartSLOduration=2.537233481 podStartE2EDuration="5.663556217s" podCreationTimestamp="2026-02-27 19:03:04 +0000 UTC" firstStartedPulling="2026-02-27 19:03:04.665632162 +0000 UTC m=+1084.144413362" lastFinishedPulling="2026-02-27 19:03:07.791954938 +0000 UTC m=+1087.270736098" observedRunningTime="2026-02-27 19:03:09.649437147 +0000 UTC m=+1089.128218377" watchObservedRunningTime="2026-02-27 19:03:09.663556217 +0000 UTC m=+1089.142337447" Feb 27 19:03:10 crc kubenswrapper[4981]: I0227 19:03:10.623140 4981 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8zq2k" Feb 27 19:03:10 crc kubenswrapper[4981]: I0227 19:03:10.635786 4981 generic.go:334] "Generic (PLEG): container finished" podID="c3f0e0cc-0738-470c-bb65-160a6b4e3e05" containerID="159991fb28606088871d77ce4cb4bb4d10623019ced442a4ed2dbe54da179b5f" exitCode=0 Feb 27 19:03:10 crc kubenswrapper[4981]: I0227 19:03:10.635841 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8zq2k" event={"ID":"c3f0e0cc-0738-470c-bb65-160a6b4e3e05","Type":"ContainerDied","Data":"159991fb28606088871d77ce4cb4bb4d10623019ced442a4ed2dbe54da179b5f"} Feb 27 19:03:10 crc kubenswrapper[4981]: I0227 19:03:10.635872 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8zq2k" event={"ID":"c3f0e0cc-0738-470c-bb65-160a6b4e3e05","Type":"ContainerDied","Data":"3b1676e185bcfbb12216cecd21e7160bb381f9525e2a5b07bb2fbb25971dd633"} Feb 27 19:03:10 crc kubenswrapper[4981]: I0227 19:03:10.635869 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-8zq2k" Feb 27 19:03:10 crc kubenswrapper[4981]: I0227 19:03:10.636041 4981 scope.go:117] "RemoveContainer" containerID="159991fb28606088871d77ce4cb4bb4d10623019ced442a4ed2dbe54da179b5f" Feb 27 19:03:10 crc kubenswrapper[4981]: I0227 19:03:10.662685 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm25h\" (UniqueName: \"kubernetes.io/projected/c3f0e0cc-0738-470c-bb65-160a6b4e3e05-kube-api-access-bm25h\") pod \"c3f0e0cc-0738-470c-bb65-160a6b4e3e05\" (UID: \"c3f0e0cc-0738-470c-bb65-160a6b4e3e05\") " Feb 27 19:03:10 crc kubenswrapper[4981]: I0227 19:03:10.673757 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3f0e0cc-0738-470c-bb65-160a6b4e3e05-kube-api-access-bm25h" (OuterVolumeSpecName: "kube-api-access-bm25h") pod "c3f0e0cc-0738-470c-bb65-160a6b4e3e05" (UID: "c3f0e0cc-0738-470c-bb65-160a6b4e3e05"). InnerVolumeSpecName "kube-api-access-bm25h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:03:10 crc kubenswrapper[4981]: I0227 19:03:10.698745 4981 scope.go:117] "RemoveContainer" containerID="159991fb28606088871d77ce4cb4bb4d10623019ced442a4ed2dbe54da179b5f" Feb 27 19:03:10 crc kubenswrapper[4981]: E0227 19:03:10.699265 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"159991fb28606088871d77ce4cb4bb4d10623019ced442a4ed2dbe54da179b5f\": container with ID starting with 159991fb28606088871d77ce4cb4bb4d10623019ced442a4ed2dbe54da179b5f not found: ID does not exist" containerID="159991fb28606088871d77ce4cb4bb4d10623019ced442a4ed2dbe54da179b5f" Feb 27 19:03:10 crc kubenswrapper[4981]: I0227 19:03:10.699305 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159991fb28606088871d77ce4cb4bb4d10623019ced442a4ed2dbe54da179b5f"} err="failed to get container status \"159991fb28606088871d77ce4cb4bb4d10623019ced442a4ed2dbe54da179b5f\": rpc error: code = NotFound desc = could not find container \"159991fb28606088871d77ce4cb4bb4d10623019ced442a4ed2dbe54da179b5f\": container with ID starting with 159991fb28606088871d77ce4cb4bb4d10623019ced442a4ed2dbe54da179b5f not found: ID does not exist" Feb 27 19:03:10 crc kubenswrapper[4981]: I0227 19:03:10.764170 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm25h\" (UniqueName: \"kubernetes.io/projected/c3f0e0cc-0738-470c-bb65-160a6b4e3e05-kube-api-access-bm25h\") on node \"crc\" DevicePath \"\"" Feb 27 19:03:11 crc kubenswrapper[4981]: I0227 19:03:11.002178 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-8zq2k"] Feb 27 19:03:11 crc kubenswrapper[4981]: I0227 19:03:11.009530 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-8zq2k"] Feb 27 19:03:11 crc kubenswrapper[4981]: I0227 19:03:11.646193 4981 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3f0e0cc-0738-470c-bb65-160a6b4e3e05" path="/var/lib/kubelet/pods/c3f0e0cc-0738-470c-bb65-160a6b4e3e05/volumes" Feb 27 19:03:11 crc kubenswrapper[4981]: I0227 19:03:11.647656 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sbzgx" event={"ID":"fd6567e9-7326-42da-8631-11a5b074f573","Type":"ContainerStarted","Data":"1e036f6d177e4cdd963c4e51c950f5f46e91ac7cdc634a24f4f26b198bf6a763"} Feb 27 19:03:11 crc kubenswrapper[4981]: I0227 19:03:11.677512 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-sbzgx" podStartSLOduration=2.296256889 podStartE2EDuration="3.677488934s" podCreationTimestamp="2026-02-27 19:03:08 +0000 UTC" firstStartedPulling="2026-02-27 19:03:09.449462002 +0000 UTC m=+1088.928243172" lastFinishedPulling="2026-02-27 19:03:10.830694017 +0000 UTC m=+1090.309475217" observedRunningTime="2026-02-27 19:03:11.671187212 +0000 UTC m=+1091.149968372" watchObservedRunningTime="2026-02-27 19:03:11.677488934 +0000 UTC m=+1091.156270134" Feb 27 19:03:18 crc kubenswrapper[4981]: I0227 19:03:18.994832 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-sbzgx" Feb 27 19:03:18 crc kubenswrapper[4981]: I0227 19:03:18.995378 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-sbzgx" Feb 27 19:03:19 crc kubenswrapper[4981]: I0227 19:03:19.029321 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-sbzgx" Feb 27 19:03:19 crc kubenswrapper[4981]: I0227 19:03:19.754494 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-sbzgx" Feb 27 19:03:24 crc kubenswrapper[4981]: I0227 19:03:24.599376 4981 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg"] Feb 27 19:03:24 crc kubenswrapper[4981]: E0227 19:03:24.601563 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f0e0cc-0738-470c-bb65-160a6b4e3e05" containerName="registry-server" Feb 27 19:03:24 crc kubenswrapper[4981]: I0227 19:03:24.601607 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f0e0cc-0738-470c-bb65-160a6b4e3e05" containerName="registry-server" Feb 27 19:03:24 crc kubenswrapper[4981]: I0227 19:03:24.601767 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f0e0cc-0738-470c-bb65-160a6b4e3e05" containerName="registry-server" Feb 27 19:03:24 crc kubenswrapper[4981]: I0227 19:03:24.602932 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg" Feb 27 19:03:24 crc kubenswrapper[4981]: I0227 19:03:24.606569 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-fz2f2" Feb 27 19:03:24 crc kubenswrapper[4981]: I0227 19:03:24.613880 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg"] Feb 27 19:03:24 crc kubenswrapper[4981]: I0227 19:03:24.683823 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7b6fd45b-7ec1-45b0-b05d-a4e216ff5780-bundle\") pod \"4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg\" (UID: \"7b6fd45b-7ec1-45b0-b05d-a4e216ff5780\") " pod="openstack-operators/4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg" Feb 27 19:03:24 crc kubenswrapper[4981]: I0227 19:03:24.685006 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/7b6fd45b-7ec1-45b0-b05d-a4e216ff5780-util\") pod \"4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg\" (UID: \"7b6fd45b-7ec1-45b0-b05d-a4e216ff5780\") " pod="openstack-operators/4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg" Feb 27 19:03:24 crc kubenswrapper[4981]: I0227 19:03:24.685188 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zth7m\" (UniqueName: \"kubernetes.io/projected/7b6fd45b-7ec1-45b0-b05d-a4e216ff5780-kube-api-access-zth7m\") pod \"4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg\" (UID: \"7b6fd45b-7ec1-45b0-b05d-a4e216ff5780\") " pod="openstack-operators/4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg" Feb 27 19:03:24 crc kubenswrapper[4981]: I0227 19:03:24.786378 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7b6fd45b-7ec1-45b0-b05d-a4e216ff5780-bundle\") pod \"4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg\" (UID: \"7b6fd45b-7ec1-45b0-b05d-a4e216ff5780\") " pod="openstack-operators/4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg" Feb 27 19:03:24 crc kubenswrapper[4981]: I0227 19:03:24.786768 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7b6fd45b-7ec1-45b0-b05d-a4e216ff5780-util\") pod \"4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg\" (UID: \"7b6fd45b-7ec1-45b0-b05d-a4e216ff5780\") " pod="openstack-operators/4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg" Feb 27 19:03:24 crc kubenswrapper[4981]: I0227 19:03:24.786887 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zth7m\" (UniqueName: \"kubernetes.io/projected/7b6fd45b-7ec1-45b0-b05d-a4e216ff5780-kube-api-access-zth7m\") pod 
\"4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg\" (UID: \"7b6fd45b-7ec1-45b0-b05d-a4e216ff5780\") " pod="openstack-operators/4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg" Feb 27 19:03:24 crc kubenswrapper[4981]: I0227 19:03:24.787536 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7b6fd45b-7ec1-45b0-b05d-a4e216ff5780-util\") pod \"4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg\" (UID: \"7b6fd45b-7ec1-45b0-b05d-a4e216ff5780\") " pod="openstack-operators/4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg" Feb 27 19:03:24 crc kubenswrapper[4981]: I0227 19:03:24.788038 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7b6fd45b-7ec1-45b0-b05d-a4e216ff5780-bundle\") pod \"4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg\" (UID: \"7b6fd45b-7ec1-45b0-b05d-a4e216ff5780\") " pod="openstack-operators/4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg" Feb 27 19:03:24 crc kubenswrapper[4981]: I0227 19:03:24.820630 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zth7m\" (UniqueName: \"kubernetes.io/projected/7b6fd45b-7ec1-45b0-b05d-a4e216ff5780-kube-api-access-zth7m\") pod \"4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg\" (UID: \"7b6fd45b-7ec1-45b0-b05d-a4e216ff5780\") " pod="openstack-operators/4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg" Feb 27 19:03:24 crc kubenswrapper[4981]: I0227 19:03:24.928195 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg" Feb 27 19:03:25 crc kubenswrapper[4981]: I0227 19:03:25.422634 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg"] Feb 27 19:03:25 crc kubenswrapper[4981]: I0227 19:03:25.798677 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg" event={"ID":"7b6fd45b-7ec1-45b0-b05d-a4e216ff5780","Type":"ContainerStarted","Data":"f07fbd3d87ef4b736608bc1157f831f85f5ca783bf399c1e5ec6ef76dcd310ef"} Feb 27 19:03:25 crc kubenswrapper[4981]: I0227 19:03:25.800034 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg" event={"ID":"7b6fd45b-7ec1-45b0-b05d-a4e216ff5780","Type":"ContainerStarted","Data":"bf61bab22b524ceaacc38b169dffe70bfa55df53e9b85e2177014d60912fe686"} Feb 27 19:03:26 crc kubenswrapper[4981]: I0227 19:03:26.808492 4981 generic.go:334] "Generic (PLEG): container finished" podID="7b6fd45b-7ec1-45b0-b05d-a4e216ff5780" containerID="f07fbd3d87ef4b736608bc1157f831f85f5ca783bf399c1e5ec6ef76dcd310ef" exitCode=0 Feb 27 19:03:26 crc kubenswrapper[4981]: I0227 19:03:26.808551 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg" event={"ID":"7b6fd45b-7ec1-45b0-b05d-a4e216ff5780","Type":"ContainerDied","Data":"f07fbd3d87ef4b736608bc1157f831f85f5ca783bf399c1e5ec6ef76dcd310ef"} Feb 27 19:03:29 crc kubenswrapper[4981]: I0227 19:03:29.838438 4981 generic.go:334] "Generic (PLEG): container finished" podID="7b6fd45b-7ec1-45b0-b05d-a4e216ff5780" containerID="2cbaba26262bc18cf21482d3b74ecd4a56ef84a48c49e57c430f2d0418d0d37d" exitCode=0 Feb 27 19:03:29 crc kubenswrapper[4981]: I0227 19:03:29.838536 4981 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg" event={"ID":"7b6fd45b-7ec1-45b0-b05d-a4e216ff5780","Type":"ContainerDied","Data":"2cbaba26262bc18cf21482d3b74ecd4a56ef84a48c49e57c430f2d0418d0d37d"} Feb 27 19:03:30 crc kubenswrapper[4981]: I0227 19:03:30.849410 4981 generic.go:334] "Generic (PLEG): container finished" podID="7b6fd45b-7ec1-45b0-b05d-a4e216ff5780" containerID="22e9c3ab05ee78a45459be93f496fd2b138f87f69698baac9aac64d285d7e30c" exitCode=0 Feb 27 19:03:30 crc kubenswrapper[4981]: I0227 19:03:30.850240 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg" event={"ID":"7b6fd45b-7ec1-45b0-b05d-a4e216ff5780","Type":"ContainerDied","Data":"22e9c3ab05ee78a45459be93f496fd2b138f87f69698baac9aac64d285d7e30c"} Feb 27 19:03:32 crc kubenswrapper[4981]: I0227 19:03:32.145448 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg" Feb 27 19:03:32 crc kubenswrapper[4981]: I0227 19:03:32.224078 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7b6fd45b-7ec1-45b0-b05d-a4e216ff5780-util\") pod \"7b6fd45b-7ec1-45b0-b05d-a4e216ff5780\" (UID: \"7b6fd45b-7ec1-45b0-b05d-a4e216ff5780\") " Feb 27 19:03:32 crc kubenswrapper[4981]: I0227 19:03:32.224152 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth7m\" (UniqueName: \"kubernetes.io/projected/7b6fd45b-7ec1-45b0-b05d-a4e216ff5780-kube-api-access-zth7m\") pod \"7b6fd45b-7ec1-45b0-b05d-a4e216ff5780\" (UID: \"7b6fd45b-7ec1-45b0-b05d-a4e216ff5780\") " Feb 27 19:03:32 crc kubenswrapper[4981]: I0227 19:03:32.224234 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7b6fd45b-7ec1-45b0-b05d-a4e216ff5780-bundle\") pod \"7b6fd45b-7ec1-45b0-b05d-a4e216ff5780\" (UID: \"7b6fd45b-7ec1-45b0-b05d-a4e216ff5780\") " Feb 27 19:03:32 crc kubenswrapper[4981]: I0227 19:03:32.225174 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b6fd45b-7ec1-45b0-b05d-a4e216ff5780-bundle" (OuterVolumeSpecName: "bundle") pod "7b6fd45b-7ec1-45b0-b05d-a4e216ff5780" (UID: "7b6fd45b-7ec1-45b0-b05d-a4e216ff5780"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:03:32 crc kubenswrapper[4981]: I0227 19:03:32.230099 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b6fd45b-7ec1-45b0-b05d-a4e216ff5780-kube-api-access-zth7m" (OuterVolumeSpecName: "kube-api-access-zth7m") pod "7b6fd45b-7ec1-45b0-b05d-a4e216ff5780" (UID: "7b6fd45b-7ec1-45b0-b05d-a4e216ff5780"). InnerVolumeSpecName "kube-api-access-zth7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:03:32 crc kubenswrapper[4981]: I0227 19:03:32.234381 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b6fd45b-7ec1-45b0-b05d-a4e216ff5780-util" (OuterVolumeSpecName: "util") pod "7b6fd45b-7ec1-45b0-b05d-a4e216ff5780" (UID: "7b6fd45b-7ec1-45b0-b05d-a4e216ff5780"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:03:32 crc kubenswrapper[4981]: I0227 19:03:32.325927 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zth7m\" (UniqueName: \"kubernetes.io/projected/7b6fd45b-7ec1-45b0-b05d-a4e216ff5780-kube-api-access-zth7m\") on node \"crc\" DevicePath \"\"" Feb 27 19:03:32 crc kubenswrapper[4981]: I0227 19:03:32.325978 4981 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7b6fd45b-7ec1-45b0-b05d-a4e216ff5780-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:03:32 crc kubenswrapper[4981]: I0227 19:03:32.325998 4981 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7b6fd45b-7ec1-45b0-b05d-a4e216ff5780-util\") on node \"crc\" DevicePath \"\"" Feb 27 19:03:32 crc kubenswrapper[4981]: I0227 19:03:32.865531 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg" event={"ID":"7b6fd45b-7ec1-45b0-b05d-a4e216ff5780","Type":"ContainerDied","Data":"bf61bab22b524ceaacc38b169dffe70bfa55df53e9b85e2177014d60912fe686"} Feb 27 19:03:32 crc kubenswrapper[4981]: I0227 19:03:32.865590 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf61bab22b524ceaacc38b169dffe70bfa55df53e9b85e2177014d60912fe686" Feb 27 19:03:32 crc kubenswrapper[4981]: I0227 19:03:32.865628 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg" Feb 27 19:03:36 crc kubenswrapper[4981]: I0227 19:03:36.863946 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7698fb7476-ljffl"] Feb 27 19:03:36 crc kubenswrapper[4981]: E0227 19:03:36.864623 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b6fd45b-7ec1-45b0-b05d-a4e216ff5780" containerName="pull" Feb 27 19:03:36 crc kubenswrapper[4981]: I0227 19:03:36.864645 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b6fd45b-7ec1-45b0-b05d-a4e216ff5780" containerName="pull" Feb 27 19:03:36 crc kubenswrapper[4981]: E0227 19:03:36.864667 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b6fd45b-7ec1-45b0-b05d-a4e216ff5780" containerName="util" Feb 27 19:03:36 crc kubenswrapper[4981]: I0227 19:03:36.864680 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b6fd45b-7ec1-45b0-b05d-a4e216ff5780" containerName="util" Feb 27 19:03:36 crc kubenswrapper[4981]: E0227 19:03:36.864715 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b6fd45b-7ec1-45b0-b05d-a4e216ff5780" containerName="extract" Feb 27 19:03:36 crc kubenswrapper[4981]: I0227 19:03:36.864731 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b6fd45b-7ec1-45b0-b05d-a4e216ff5780" containerName="extract" Feb 27 19:03:36 crc kubenswrapper[4981]: I0227 19:03:36.864933 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b6fd45b-7ec1-45b0-b05d-a4e216ff5780" containerName="extract" Feb 27 19:03:36 crc kubenswrapper[4981]: I0227 19:03:36.866219 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7698fb7476-ljffl" Feb 27 19:03:36 crc kubenswrapper[4981]: I0227 19:03:36.867818 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-4kx74" Feb 27 19:03:36 crc kubenswrapper[4981]: I0227 19:03:36.882530 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7698fb7476-ljffl"] Feb 27 19:03:36 crc kubenswrapper[4981]: I0227 19:03:36.905677 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pkdc\" (UniqueName: \"kubernetes.io/projected/ded84d09-908f-47fd-b75b-25013113939f-kube-api-access-7pkdc\") pod \"openstack-operator-controller-init-7698fb7476-ljffl\" (UID: \"ded84d09-908f-47fd-b75b-25013113939f\") " pod="openstack-operators/openstack-operator-controller-init-7698fb7476-ljffl" Feb 27 19:03:37 crc kubenswrapper[4981]: I0227 19:03:37.007215 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pkdc\" (UniqueName: \"kubernetes.io/projected/ded84d09-908f-47fd-b75b-25013113939f-kube-api-access-7pkdc\") pod \"openstack-operator-controller-init-7698fb7476-ljffl\" (UID: \"ded84d09-908f-47fd-b75b-25013113939f\") " pod="openstack-operators/openstack-operator-controller-init-7698fb7476-ljffl" Feb 27 19:03:37 crc kubenswrapper[4981]: I0227 19:03:37.023960 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pkdc\" (UniqueName: \"kubernetes.io/projected/ded84d09-908f-47fd-b75b-25013113939f-kube-api-access-7pkdc\") pod \"openstack-operator-controller-init-7698fb7476-ljffl\" (UID: \"ded84d09-908f-47fd-b75b-25013113939f\") " pod="openstack-operators/openstack-operator-controller-init-7698fb7476-ljffl" Feb 27 19:03:37 crc kubenswrapper[4981]: I0227 19:03:37.185403 4981 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7698fb7476-ljffl" Feb 27 19:03:37 crc kubenswrapper[4981]: I0227 19:03:37.618699 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7698fb7476-ljffl"] Feb 27 19:03:37 crc kubenswrapper[4981]: I0227 19:03:37.898533 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7698fb7476-ljffl" event={"ID":"ded84d09-908f-47fd-b75b-25013113939f","Type":"ContainerStarted","Data":"47a59599aa2773d15cc1196ff084a0c6efa8d5bcfd67e4b78b41d76c97547ff5"} Feb 27 19:03:44 crc kubenswrapper[4981]: I0227 19:03:44.971086 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7698fb7476-ljffl" event={"ID":"ded84d09-908f-47fd-b75b-25013113939f","Type":"ContainerStarted","Data":"3a69aba85f7966a555e1bfd64e2f3b07f1185a5813af7eff44250b170acf5308"} Feb 27 19:03:44 crc kubenswrapper[4981]: I0227 19:03:44.971664 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7698fb7476-ljffl" Feb 27 19:03:45 crc kubenswrapper[4981]: I0227 19:03:45.007250 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7698fb7476-ljffl" podStartSLOduration=2.024647275 podStartE2EDuration="9.007227524s" podCreationTimestamp="2026-02-27 19:03:36 +0000 UTC" firstStartedPulling="2026-02-27 19:03:37.644261984 +0000 UTC m=+1117.123043164" lastFinishedPulling="2026-02-27 19:03:44.626842253 +0000 UTC m=+1124.105623413" observedRunningTime="2026-02-27 19:03:45.003269474 +0000 UTC m=+1124.482050664" watchObservedRunningTime="2026-02-27 19:03:45.007227524 +0000 UTC m=+1124.486008704" Feb 27 19:03:57 crc kubenswrapper[4981]: I0227 19:03:57.192403 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/openstack-operator-controller-init-7698fb7476-ljffl" Feb 27 19:04:00 crc kubenswrapper[4981]: I0227 19:04:00.290891 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536984-jjx78"] Feb 27 19:04:00 crc kubenswrapper[4981]: I0227 19:04:00.292122 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536984-jjx78" Feb 27 19:04:00 crc kubenswrapper[4981]: I0227 19:04:00.294253 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf" Feb 27 19:04:00 crc kubenswrapper[4981]: I0227 19:04:00.294623 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:04:00 crc kubenswrapper[4981]: I0227 19:04:00.294650 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:04:00 crc kubenswrapper[4981]: I0227 19:04:00.300571 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536984-jjx78"] Feb 27 19:04:00 crc kubenswrapper[4981]: I0227 19:04:00.694682 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx4rf\" (UniqueName: \"kubernetes.io/projected/cce3f7fc-3761-458a-91ed-53ff41805400-kube-api-access-qx4rf\") pod \"auto-csr-approver-29536984-jjx78\" (UID: \"cce3f7fc-3761-458a-91ed-53ff41805400\") " pod="openshift-infra/auto-csr-approver-29536984-jjx78" Feb 27 19:04:00 crc kubenswrapper[4981]: I0227 19:04:00.796536 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx4rf\" (UniqueName: \"kubernetes.io/projected/cce3f7fc-3761-458a-91ed-53ff41805400-kube-api-access-qx4rf\") pod \"auto-csr-approver-29536984-jjx78\" (UID: \"cce3f7fc-3761-458a-91ed-53ff41805400\") " 
pod="openshift-infra/auto-csr-approver-29536984-jjx78" Feb 27 19:04:00 crc kubenswrapper[4981]: I0227 19:04:00.822325 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx4rf\" (UniqueName: \"kubernetes.io/projected/cce3f7fc-3761-458a-91ed-53ff41805400-kube-api-access-qx4rf\") pod \"auto-csr-approver-29536984-jjx78\" (UID: \"cce3f7fc-3761-458a-91ed-53ff41805400\") " pod="openshift-infra/auto-csr-approver-29536984-jjx78" Feb 27 19:04:00 crc kubenswrapper[4981]: I0227 19:04:00.914910 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536984-jjx78" Feb 27 19:04:01 crc kubenswrapper[4981]: I0227 19:04:01.578785 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536984-jjx78"] Feb 27 19:04:02 crc kubenswrapper[4981]: I0227 19:04:02.403997 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536984-jjx78" event={"ID":"cce3f7fc-3761-458a-91ed-53ff41805400","Type":"ContainerStarted","Data":"3cbb7e0e4d0fb683fc88b45577478b1a779ad461070923fce087d05f577c1404"} Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.624748 4981 generic.go:334] "Generic (PLEG): container finished" podID="cce3f7fc-3761-458a-91ed-53ff41805400" containerID="37b76be44e8849910777036c66ac1eee0d414433ae844364d0a0017f22fd72cb" exitCode=0 Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.624878 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536984-jjx78" event={"ID":"cce3f7fc-3761-458a-91ed-53ff41805400","Type":"ContainerDied","Data":"37b76be44e8849910777036c66ac1eee0d414433ae844364d0a0017f22fd72cb"} Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.746704 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-9fqts"] Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.747542 4981 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-9fqts" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.750033 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-nm6d9" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.757286 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-jtd4l"] Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.758097 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-jtd4l" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.760721 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-9vh5w" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.762580 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-jtd4l"] Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.766761 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-9fqts"] Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.773883 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-sbs2q"] Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.774709 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-sbs2q" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.788779 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z44xx\" (UniqueName: \"kubernetes.io/projected/fdf56547-b3d3-4481-acea-493c4ea4b2d9-kube-api-access-z44xx\") pod \"barbican-operator-controller-manager-6db6876945-jtd4l\" (UID: \"fdf56547-b3d3-4481-acea-493c4ea4b2d9\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-jtd4l" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.788832 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdfkk\" (UniqueName: \"kubernetes.io/projected/7a1c1676-014d-4de0-ab20-a951ad5bb7fe-kube-api-access-kdfkk\") pod \"cinder-operator-controller-manager-55d77d7b5c-9fqts\" (UID: \"7a1c1676-014d-4de0-ab20-a951ad5bb7fe\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-9fqts" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.788872 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66lww\" (UniqueName: \"kubernetes.io/projected/12a33549-4f35-4fe1-851c-21e46a44dff6-kube-api-access-66lww\") pod \"designate-operator-controller-manager-5d87c9d997-sbs2q\" (UID: \"12a33549-4f35-4fe1-851c-21e46a44dff6\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-sbs2q" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.792657 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-sbs2q"] Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.793840 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-k2tnl" Feb 27 19:04:17 crc kubenswrapper[4981]: 
I0227 19:04:17.799445 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-tjvzl"] Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.800229 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-tjvzl" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.804495 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-p74nh" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.807927 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-59gjc"] Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.823830 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-59gjc" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.828955 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-q6slm" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.838209 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-59gjc"] Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.847726 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-tjvzl"] Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.854273 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-dhczx"] Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.861150 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-dhczx" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.866676 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-zrrzs" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.871152 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-dhczx"] Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.891032 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-r27km"] Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.891806 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-r27km" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.893618 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-mjbc5" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.893809 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.894830 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z44xx\" (UniqueName: \"kubernetes.io/projected/fdf56547-b3d3-4481-acea-493c4ea4b2d9-kube-api-access-z44xx\") pod \"barbican-operator-controller-manager-6db6876945-jtd4l\" (UID: \"fdf56547-b3d3-4481-acea-493c4ea4b2d9\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-jtd4l" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.894861 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdfkk\" (UniqueName: 
\"kubernetes.io/projected/7a1c1676-014d-4de0-ab20-a951ad5bb7fe-kube-api-access-kdfkk\") pod \"cinder-operator-controller-manager-55d77d7b5c-9fqts\" (UID: \"7a1c1676-014d-4de0-ab20-a951ad5bb7fe\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-9fqts" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.894893 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66lww\" (UniqueName: \"kubernetes.io/projected/12a33549-4f35-4fe1-851c-21e46a44dff6-kube-api-access-66lww\") pod \"designate-operator-controller-manager-5d87c9d997-sbs2q\" (UID: \"12a33549-4f35-4fe1-851c-21e46a44dff6\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-sbs2q" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.905172 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-6kvkg"] Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.907164 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-6kvkg" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.908641 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-k2d48" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.925830 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z44xx\" (UniqueName: \"kubernetes.io/projected/fdf56547-b3d3-4481-acea-493c4ea4b2d9-kube-api-access-z44xx\") pod \"barbican-operator-controller-manager-6db6876945-jtd4l\" (UID: \"fdf56547-b3d3-4481-acea-493c4ea4b2d9\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-jtd4l" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.926961 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdfkk\" (UniqueName: \"kubernetes.io/projected/7a1c1676-014d-4de0-ab20-a951ad5bb7fe-kube-api-access-kdfkk\") pod \"cinder-operator-controller-manager-55d77d7b5c-9fqts\" (UID: \"7a1c1676-014d-4de0-ab20-a951ad5bb7fe\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-9fqts" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.928301 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66lww\" (UniqueName: \"kubernetes.io/projected/12a33549-4f35-4fe1-851c-21e46a44dff6-kube-api-access-66lww\") pod \"designate-operator-controller-manager-5d87c9d997-sbs2q\" (UID: \"12a33549-4f35-4fe1-851c-21e46a44dff6\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-sbs2q" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.929754 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55ffd4876b-sblkk"] Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.930782 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-sblkk" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.936364 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-r27km"] Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.938852 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-jck7m" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.951355 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-6kvkg"] Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.951582 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55ffd4876b-sblkk"] Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.956589 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-6fkss"] Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.957490 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-6fkss" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.960080 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rfkwv" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.969063 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-556b8b874-k2kn8"] Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.969945 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-k2kn8" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.972186 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-5g2w2" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.973626 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-6fkss"] Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.989354 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-6xfvc"] Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.990106 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54688575f-6xfvc" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.993319 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-556b8b874-k2kn8"] Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.995457 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-22vw4" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.997374 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl2cs\" (UniqueName: \"kubernetes.io/projected/2cd1d521-194a-48fa-9412-a95ff0c2c598-kube-api-access-wl2cs\") pod \"glance-operator-controller-manager-64db6967f8-tjvzl\" (UID: \"2cd1d521-194a-48fa-9412-a95ff0c2c598\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-tjvzl" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.997529 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2wzt\" (UniqueName: 
\"kubernetes.io/projected/2042b3a5-c802-49c2-911b-b28eb19aecf5-kube-api-access-b2wzt\") pod \"infra-operator-controller-manager-f7fcc58b9-r27km\" (UID: \"2042b3a5-c802-49c2-911b-b28eb19aecf5\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-r27km" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.997986 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlnhs\" (UniqueName: \"kubernetes.io/projected/2741c246-6bf8-411d-bdd7-29cb20588c0c-kube-api-access-jlnhs\") pod \"heat-operator-controller-manager-cf99c678f-59gjc\" (UID: \"2741c246-6bf8-411d-bdd7-29cb20588c0c\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-59gjc" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.998103 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2042b3a5-c802-49c2-911b-b28eb19aecf5-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-r27km\" (UID: \"2042b3a5-c802-49c2-911b-b28eb19aecf5\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-r27km" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.998230 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kphpx\" (UniqueName: \"kubernetes.io/projected/a21f22e5-6cc2-43cc-890c-c9e42d8b12c5-kube-api-access-kphpx\") pod \"horizon-operator-controller-manager-78bc7f9bd9-dhczx\" (UID: \"a21f22e5-6cc2-43cc-890c-c9e42d8b12c5\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-dhczx" Feb 27 19:04:17 crc kubenswrapper[4981]: I0227 19:04:17.998784 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-6xfvc"] Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.013908 4981 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-bq9tz"] Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.014694 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-bq9tz" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.023946 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-sjqf2" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.046108 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-bq9tz"] Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.048807 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-jvb62"] Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.049745 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-jvb62" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.055188 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-5jnf4" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.063475 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-9fqts" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.069433 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-jvb62"] Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.081999 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9chxvph"] Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.088166 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9chxvph" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.088909 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-jtd4l" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.102288 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.102467 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-vkv7h" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.104543 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-sbs2q" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.105320 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kphpx\" (UniqueName: \"kubernetes.io/projected/a21f22e5-6cc2-43cc-890c-c9e42d8b12c5-kube-api-access-kphpx\") pod \"horizon-operator-controller-manager-78bc7f9bd9-dhczx\" (UID: \"a21f22e5-6cc2-43cc-890c-c9e42d8b12c5\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-dhczx" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.105360 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl2cs\" (UniqueName: \"kubernetes.io/projected/2cd1d521-194a-48fa-9412-a95ff0c2c598-kube-api-access-wl2cs\") pod \"glance-operator-controller-manager-64db6967f8-tjvzl\" (UID: \"2cd1d521-194a-48fa-9412-a95ff0c2c598\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-tjvzl" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.105383 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtv52\" (UniqueName: \"kubernetes.io/projected/5e144a53-3c1b-49db-9f08-d93ebe9fb576-kube-api-access-mtv52\") pod \"ironic-operator-controller-manager-545456dc4-6kvkg\" (UID: \"5e144a53-3c1b-49db-9f08-d93ebe9fb576\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-6kvkg" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.105404 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7h5d\" (UniqueName: \"kubernetes.io/projected/67789b9f-79ac-4901-8acc-22a86fb876c4-kube-api-access-b7h5d\") pod \"manila-operator-controller-manager-67d996989d-6fkss\" (UID: \"67789b9f-79ac-4901-8acc-22a86fb876c4\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-6fkss" Feb 27 19:04:18 crc 
kubenswrapper[4981]: I0227 19:04:18.105433 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2wzt\" (UniqueName: \"kubernetes.io/projected/2042b3a5-c802-49c2-911b-b28eb19aecf5-kube-api-access-b2wzt\") pod \"infra-operator-controller-manager-f7fcc58b9-r27km\" (UID: \"2042b3a5-c802-49c2-911b-b28eb19aecf5\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-r27km" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.105450 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlnhs\" (UniqueName: \"kubernetes.io/projected/2741c246-6bf8-411d-bdd7-29cb20588c0c-kube-api-access-jlnhs\") pod \"heat-operator-controller-manager-cf99c678f-59gjc\" (UID: \"2741c246-6bf8-411d-bdd7-29cb20588c0c\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-59gjc" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.105472 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsp9t\" (UniqueName: \"kubernetes.io/projected/96d76f06-213f-4b51-9dfa-7e77c5b97174-kube-api-access-gsp9t\") pod \"keystone-operator-controller-manager-55ffd4876b-sblkk\" (UID: \"96d76f06-213f-4b51-9dfa-7e77c5b97174\") " pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-sblkk" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.105497 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2042b3a5-c802-49c2-911b-b28eb19aecf5-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-r27km\" (UID: \"2042b3a5-c802-49c2-911b-b28eb19aecf5\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-r27km" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.105528 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlzn8\" (UniqueName: 
\"kubernetes.io/projected/e1c487e5-53af-41ef-8713-87d17ab9632d-kube-api-access-wlzn8\") pod \"mariadb-operator-controller-manager-556b8b874-k2kn8\" (UID: \"e1c487e5-53af-41ef-8713-87d17ab9632d\") " pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-k2kn8" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.105560 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cflq\" (UniqueName: \"kubernetes.io/projected/1636f598-89d5-474c-85a9-69ea06f889de-kube-api-access-5cflq\") pod \"neutron-operator-controller-manager-54688575f-6xfvc\" (UID: \"1636f598-89d5-474c-85a9-69ea06f889de\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-6xfvc" Feb 27 19:04:18 crc kubenswrapper[4981]: E0227 19:04:18.106152 4981 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 27 19:04:18 crc kubenswrapper[4981]: E0227 19:04:18.110208 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2042b3a5-c802-49c2-911b-b28eb19aecf5-cert podName:2042b3a5-c802-49c2-911b-b28eb19aecf5 nodeName:}" failed. No retries permitted until 2026-02-27 19:04:18.61019033 +0000 UTC m=+1158.088971490 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2042b3a5-c802-49c2-911b-b28eb19aecf5-cert") pod "infra-operator-controller-manager-f7fcc58b9-r27km" (UID: "2042b3a5-c802-49c2-911b-b28eb19aecf5") : secret "infra-operator-webhook-server-cert" not found Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.134713 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-h9cbz"] Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.135877 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-h9cbz" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.136513 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kphpx\" (UniqueName: \"kubernetes.io/projected/a21f22e5-6cc2-43cc-890c-c9e42d8b12c5-kube-api-access-kphpx\") pod \"horizon-operator-controller-manager-78bc7f9bd9-dhczx\" (UID: \"a21f22e5-6cc2-43cc-890c-c9e42d8b12c5\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-dhczx" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.143861 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlnhs\" (UniqueName: \"kubernetes.io/projected/2741c246-6bf8-411d-bdd7-29cb20588c0c-kube-api-access-jlnhs\") pod \"heat-operator-controller-manager-cf99c678f-59gjc\" (UID: \"2741c246-6bf8-411d-bdd7-29cb20588c0c\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-59gjc" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.152603 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-g74mq" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.152733 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2wzt\" (UniqueName: \"kubernetes.io/projected/2042b3a5-c802-49c2-911b-b28eb19aecf5-kube-api-access-b2wzt\") pod \"infra-operator-controller-manager-f7fcc58b9-r27km\" (UID: \"2042b3a5-c802-49c2-911b-b28eb19aecf5\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-r27km" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.155429 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl2cs\" (UniqueName: \"kubernetes.io/projected/2cd1d521-194a-48fa-9412-a95ff0c2c598-kube-api-access-wl2cs\") pod \"glance-operator-controller-manager-64db6967f8-tjvzl\" (UID: 
\"2cd1d521-194a-48fa-9412-a95ff0c2c598\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-tjvzl" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.156752 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-bvts5"] Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.157582 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-bvts5" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.159371 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-r2jwt" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.166041 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-59gjc" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.201337 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-dhczx" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.211790 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb6rg\" (UniqueName: \"kubernetes.io/projected/70ce2fb0-509d-4f5a-aff5-8b71df9f78c4-kube-api-access-fb6rg\") pod \"octavia-operator-controller-manager-5d86c7ddb7-jvb62\" (UID: \"70ce2fb0-509d-4f5a-aff5-8b71df9f78c4\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-jvb62" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.211838 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsp9t\" (UniqueName: \"kubernetes.io/projected/96d76f06-213f-4b51-9dfa-7e77c5b97174-kube-api-access-gsp9t\") pod \"keystone-operator-controller-manager-55ffd4876b-sblkk\" (UID: \"96d76f06-213f-4b51-9dfa-7e77c5b97174\") " pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-sblkk" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.211861 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66fr2\" (UniqueName: \"kubernetes.io/projected/7cbe4d2e-bd57-452d-b873-709e1de024e7-kube-api-access-66fr2\") pod \"placement-operator-controller-manager-648564c9fc-bvts5\" (UID: \"7cbe4d2e-bd57-452d-b873-709e1de024e7\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-bvts5" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.211908 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qpfz\" (UniqueName: \"kubernetes.io/projected/8120c80b-1df9-4534-b5c6-1ff42e7dd5f9-kube-api-access-6qpfz\") pod \"nova-operator-controller-manager-74b6b5dc96-bq9tz\" (UID: \"8120c80b-1df9-4534-b5c6-1ff42e7dd5f9\") " 
pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-bq9tz" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.211926 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87494b7e-5ff9-4bbf-b2b6-848c5d9269dc-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9chxvph\" (UID: \"87494b7e-5ff9-4bbf-b2b6-848c5d9269dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9chxvph" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.211949 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlzn8\" (UniqueName: \"kubernetes.io/projected/e1c487e5-53af-41ef-8713-87d17ab9632d-kube-api-access-wlzn8\") pod \"mariadb-operator-controller-manager-556b8b874-k2kn8\" (UID: \"e1c487e5-53af-41ef-8713-87d17ab9632d\") " pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-k2kn8" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.211973 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cflq\" (UniqueName: \"kubernetes.io/projected/1636f598-89d5-474c-85a9-69ea06f889de-kube-api-access-5cflq\") pod \"neutron-operator-controller-manager-54688575f-6xfvc\" (UID: \"1636f598-89d5-474c-85a9-69ea06f889de\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-6xfvc" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.212017 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9q52\" (UniqueName: \"kubernetes.io/projected/87494b7e-5ff9-4bbf-b2b6-848c5d9269dc-kube-api-access-b9q52\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9chxvph\" (UID: \"87494b7e-5ff9-4bbf-b2b6-848c5d9269dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9chxvph" Feb 27 19:04:18 crc 
kubenswrapper[4981]: I0227 19:04:18.212040 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtv52\" (UniqueName: \"kubernetes.io/projected/5e144a53-3c1b-49db-9f08-d93ebe9fb576-kube-api-access-mtv52\") pod \"ironic-operator-controller-manager-545456dc4-6kvkg\" (UID: \"5e144a53-3c1b-49db-9f08-d93ebe9fb576\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-6kvkg" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.212071 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7h5d\" (UniqueName: \"kubernetes.io/projected/67789b9f-79ac-4901-8acc-22a86fb876c4-kube-api-access-b7h5d\") pod \"manila-operator-controller-manager-67d996989d-6fkss\" (UID: \"67789b9f-79ac-4901-8acc-22a86fb876c4\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-6fkss" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.212090 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq68p\" (UniqueName: \"kubernetes.io/projected/e9987372-8f11-4038-939a-75d1152e5667-kube-api-access-sq68p\") pod \"ovn-operator-controller-manager-75684d597f-h9cbz\" (UID: \"e9987372-8f11-4038-939a-75d1152e5667\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-h9cbz" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.219806 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-h9cbz"] Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.238498 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-bvts5"] Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.252463 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtv52\" (UniqueName: 
\"kubernetes.io/projected/5e144a53-3c1b-49db-9f08-d93ebe9fb576-kube-api-access-mtv52\") pod \"ironic-operator-controller-manager-545456dc4-6kvkg\" (UID: \"5e144a53-3c1b-49db-9f08-d93ebe9fb576\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-6kvkg" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.254321 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-xzqdb"] Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.280252 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cflq\" (UniqueName: \"kubernetes.io/projected/1636f598-89d5-474c-85a9-69ea06f889de-kube-api-access-5cflq\") pod \"neutron-operator-controller-manager-54688575f-6xfvc\" (UID: \"1636f598-89d5-474c-85a9-69ea06f889de\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-6xfvc" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.282928 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7h5d\" (UniqueName: \"kubernetes.io/projected/67789b9f-79ac-4901-8acc-22a86fb876c4-kube-api-access-b7h5d\") pod \"manila-operator-controller-manager-67d996989d-6fkss\" (UID: \"67789b9f-79ac-4901-8acc-22a86fb876c4\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-6fkss" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.283041 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-xzqdb" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.283403 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-6kvkg" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.298213 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlzn8\" (UniqueName: \"kubernetes.io/projected/e1c487e5-53af-41ef-8713-87d17ab9632d-kube-api-access-wlzn8\") pod \"mariadb-operator-controller-manager-556b8b874-k2kn8\" (UID: \"e1c487e5-53af-41ef-8713-87d17ab9632d\") " pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-k2kn8" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.292359 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-6fkss" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.284641 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsp9t\" (UniqueName: \"kubernetes.io/projected/96d76f06-213f-4b51-9dfa-7e77c5b97174-kube-api-access-gsp9t\") pod \"keystone-operator-controller-manager-55ffd4876b-sblkk\" (UID: \"96d76f06-213f-4b51-9dfa-7e77c5b97174\") " pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-sblkk" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.296340 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9chxvph"] Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.299515 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-4lwl5" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.307404 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-xzqdb"] Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.309832 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54688575f-6xfvc" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.313390 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb6rg\" (UniqueName: \"kubernetes.io/projected/70ce2fb0-509d-4f5a-aff5-8b71df9f78c4-kube-api-access-fb6rg\") pod \"octavia-operator-controller-manager-5d86c7ddb7-jvb62\" (UID: \"70ce2fb0-509d-4f5a-aff5-8b71df9f78c4\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-jvb62" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.315439 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66fr2\" (UniqueName: \"kubernetes.io/projected/7cbe4d2e-bd57-452d-b873-709e1de024e7-kube-api-access-66fr2\") pod \"placement-operator-controller-manager-648564c9fc-bvts5\" (UID: \"7cbe4d2e-bd57-452d-b873-709e1de024e7\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-bvts5" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.315759 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qpfz\" (UniqueName: \"kubernetes.io/projected/8120c80b-1df9-4534-b5c6-1ff42e7dd5f9-kube-api-access-6qpfz\") pod \"nova-operator-controller-manager-74b6b5dc96-bq9tz\" (UID: \"8120c80b-1df9-4534-b5c6-1ff42e7dd5f9\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-bq9tz" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.315786 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87494b7e-5ff9-4bbf-b2b6-848c5d9269dc-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9chxvph\" (UID: \"87494b7e-5ff9-4bbf-b2b6-848c5d9269dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9chxvph" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.315869 4981 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9q52\" (UniqueName: \"kubernetes.io/projected/87494b7e-5ff9-4bbf-b2b6-848c5d9269dc-kube-api-access-b9q52\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9chxvph\" (UID: \"87494b7e-5ff9-4bbf-b2b6-848c5d9269dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9chxvph" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.315912 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq68p\" (UniqueName: \"kubernetes.io/projected/e9987372-8f11-4038-939a-75d1152e5667-kube-api-access-sq68p\") pod \"ovn-operator-controller-manager-75684d597f-h9cbz\" (UID: \"e9987372-8f11-4038-939a-75d1152e5667\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-h9cbz" Feb 27 19:04:18 crc kubenswrapper[4981]: E0227 19:04:18.316460 4981 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 19:04:18 crc kubenswrapper[4981]: E0227 19:04:18.316545 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87494b7e-5ff9-4bbf-b2b6-848c5d9269dc-cert podName:87494b7e-5ff9-4bbf-b2b6-848c5d9269dc nodeName:}" failed. No retries permitted until 2026-02-27 19:04:18.816529139 +0000 UTC m=+1158.295310299 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/87494b7e-5ff9-4bbf-b2b6-848c5d9269dc-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9chxvph" (UID: "87494b7e-5ff9-4bbf-b2b6-848c5d9269dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.351305 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq68p\" (UniqueName: \"kubernetes.io/projected/e9987372-8f11-4038-939a-75d1152e5667-kube-api-access-sq68p\") pod \"ovn-operator-controller-manager-75684d597f-h9cbz\" (UID: \"e9987372-8f11-4038-939a-75d1152e5667\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-h9cbz" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.352807 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb6rg\" (UniqueName: \"kubernetes.io/projected/70ce2fb0-509d-4f5a-aff5-8b71df9f78c4-kube-api-access-fb6rg\") pod \"octavia-operator-controller-manager-5d86c7ddb7-jvb62\" (UID: \"70ce2fb0-509d-4f5a-aff5-8b71df9f78c4\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-jvb62" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.357975 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qpfz\" (UniqueName: \"kubernetes.io/projected/8120c80b-1df9-4534-b5c6-1ff42e7dd5f9-kube-api-access-6qpfz\") pod \"nova-operator-controller-manager-74b6b5dc96-bq9tz\" (UID: \"8120c80b-1df9-4534-b5c6-1ff42e7dd5f9\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-bq9tz" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.364208 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-jvb62" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.374711 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9q52\" (UniqueName: \"kubernetes.io/projected/87494b7e-5ff9-4bbf-b2b6-848c5d9269dc-kube-api-access-b9q52\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9chxvph\" (UID: \"87494b7e-5ff9-4bbf-b2b6-848c5d9269dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9chxvph" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.376838 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66fr2\" (UniqueName: \"kubernetes.io/projected/7cbe4d2e-bd57-452d-b873-709e1de024e7-kube-api-access-66fr2\") pod \"placement-operator-controller-manager-648564c9fc-bvts5\" (UID: \"7cbe4d2e-bd57-452d-b873-709e1de024e7\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-bvts5" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.377332 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fdb694969-rb9hs"] Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.380148 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-rb9hs" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.383024 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-w9tvz" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.388796 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-bvts5" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.407332 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-h9cbz" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.407678 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fdb694969-rb9hs"] Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.417487 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtv2g\" (UniqueName: \"kubernetes.io/projected/2691a6d3-7eae-4d8c-b2e4-2157b87f0766-kube-api-access-xtv2g\") pod \"swift-operator-controller-manager-9b9ff9f4d-xzqdb\" (UID: \"2691a6d3-7eae-4d8c-b2e4-2157b87f0766\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-xzqdb" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.440536 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-tjvzl" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.456182 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-w46z2"] Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.457174 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-w46z2" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.462447 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-hs2mv" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.494684 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-w46z2"] Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.512794 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-9pvd5"] Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.513651 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-9pvd5" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.522753 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-569w2" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.523242 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-9pvd5"] Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.523788 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtv2g\" (UniqueName: \"kubernetes.io/projected/2691a6d3-7eae-4d8c-b2e4-2157b87f0766-kube-api-access-xtv2g\") pod \"swift-operator-controller-manager-9b9ff9f4d-xzqdb\" (UID: \"2691a6d3-7eae-4d8c-b2e4-2157b87f0766\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-xzqdb" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.523848 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sln4q\" (UniqueName: 
\"kubernetes.io/projected/93a2ac79-08d9-4559-a954-bdf0b2eb4dab-kube-api-access-sln4q\") pod \"telemetry-operator-controller-manager-5fdb694969-rb9hs\" (UID: \"93a2ac79-08d9-4559-a954-bdf0b2eb4dab\") " pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-rb9hs" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.554804 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm"] Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.558360 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.559398 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm"] Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.564860 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtv2g\" (UniqueName: \"kubernetes.io/projected/2691a6d3-7eae-4d8c-b2e4-2157b87f0766-kube-api-access-xtv2g\") pod \"swift-operator-controller-manager-9b9ff9f4d-xzqdb\" (UID: \"2691a6d3-7eae-4d8c-b2e4-2157b87f0766\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-xzqdb" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.565101 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.566385 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.566479 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-dcv7j" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.581190 4981 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-sblkk" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.596205 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-k2kn8" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.602132 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wlwb7"] Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.603191 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wlwb7" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.609476 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-zdtpl" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.620424 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wlwb7"] Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.628121 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2dmj\" (UniqueName: \"kubernetes.io/projected/1ac62c06-bfa2-435e-a497-7d0ce40f0fd4-kube-api-access-h2dmj\") pod \"watcher-operator-controller-manager-bccc79885-9pvd5\" (UID: \"1ac62c06-bfa2-435e-a497-7d0ce40f0fd4\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-9pvd5" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.628154 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-webhook-certs\") pod \"openstack-operator-controller-manager-67c78cbb8b-dmjqm\" (UID: 
\"d5f991dd-8062-43a8-8725-7a60c5a27a14\") " pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.628198 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m874t\" (UniqueName: \"kubernetes.io/projected/d5f991dd-8062-43a8-8725-7a60c5a27a14-kube-api-access-m874t\") pod \"openstack-operator-controller-manager-67c78cbb8b-dmjqm\" (UID: \"d5f991dd-8062-43a8-8725-7a60c5a27a14\") " pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.628314 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-metrics-certs\") pod \"openstack-operator-controller-manager-67c78cbb8b-dmjqm\" (UID: \"d5f991dd-8062-43a8-8725-7a60c5a27a14\") " pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.628387 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2042b3a5-c802-49c2-911b-b28eb19aecf5-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-r27km\" (UID: \"2042b3a5-c802-49c2-911b-b28eb19aecf5\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-r27km" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.628425 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c68b\" (UniqueName: \"kubernetes.io/projected/e210121e-ac51-4667-9bbc-7080ed583a49-kube-api-access-8c68b\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wlwb7\" (UID: \"e210121e-ac51-4667-9bbc-7080ed583a49\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wlwb7" Feb 27 19:04:18 crc 
kubenswrapper[4981]: I0227 19:04:18.628514 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sln4q\" (UniqueName: \"kubernetes.io/projected/93a2ac79-08d9-4559-a954-bdf0b2eb4dab-kube-api-access-sln4q\") pod \"telemetry-operator-controller-manager-5fdb694969-rb9hs\" (UID: \"93a2ac79-08d9-4559-a954-bdf0b2eb4dab\") " pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-rb9hs" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.628636 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2z9m\" (UniqueName: \"kubernetes.io/projected/f28d8002-92dc-43b8-a2d5-858fd350c18c-kube-api-access-x2z9m\") pod \"test-operator-controller-manager-55b5ff4dbb-w46z2\" (UID: \"f28d8002-92dc-43b8-a2d5-858fd350c18c\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-w46z2" Feb 27 19:04:18 crc kubenswrapper[4981]: E0227 19:04:18.628705 4981 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 27 19:04:18 crc kubenswrapper[4981]: E0227 19:04:18.628764 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2042b3a5-c802-49c2-911b-b28eb19aecf5-cert podName:2042b3a5-c802-49c2-911b-b28eb19aecf5 nodeName:}" failed. No retries permitted until 2026-02-27 19:04:19.62874546 +0000 UTC m=+1159.107526620 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2042b3a5-c802-49c2-911b-b28eb19aecf5-cert") pod "infra-operator-controller-manager-f7fcc58b9-r27km" (UID: "2042b3a5-c802-49c2-911b-b28eb19aecf5") : secret "infra-operator-webhook-server-cert" not found Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.632434 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-bq9tz" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.659413 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-jtd4l"] Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.659508 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sln4q\" (UniqueName: \"kubernetes.io/projected/93a2ac79-08d9-4559-a954-bdf0b2eb4dab-kube-api-access-sln4q\") pod \"telemetry-operator-controller-manager-5fdb694969-rb9hs\" (UID: \"93a2ac79-08d9-4559-a954-bdf0b2eb4dab\") " pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-rb9hs" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.732562 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-metrics-certs\") pod \"openstack-operator-controller-manager-67c78cbb8b-dmjqm\" (UID: \"d5f991dd-8062-43a8-8725-7a60c5a27a14\") " pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.732629 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c68b\" (UniqueName: \"kubernetes.io/projected/e210121e-ac51-4667-9bbc-7080ed583a49-kube-api-access-8c68b\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wlwb7\" (UID: \"e210121e-ac51-4667-9bbc-7080ed583a49\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wlwb7" Feb 27 19:04:18 crc kubenswrapper[4981]: E0227 19:04:18.732659 4981 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 27 19:04:18 crc kubenswrapper[4981]: E0227 19:04:18.732714 4981 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-metrics-certs podName:d5f991dd-8062-43a8-8725-7a60c5a27a14 nodeName:}" failed. No retries permitted until 2026-02-27 19:04:19.232698743 +0000 UTC m=+1158.711479903 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-metrics-certs") pod "openstack-operator-controller-manager-67c78cbb8b-dmjqm" (UID: "d5f991dd-8062-43a8-8725-7a60c5a27a14") : secret "metrics-server-cert" not found Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.732754 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2z9m\" (UniqueName: \"kubernetes.io/projected/f28d8002-92dc-43b8-a2d5-858fd350c18c-kube-api-access-x2z9m\") pod \"test-operator-controller-manager-55b5ff4dbb-w46z2\" (UID: \"f28d8002-92dc-43b8-a2d5-858fd350c18c\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-w46z2" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.732778 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2dmj\" (UniqueName: \"kubernetes.io/projected/1ac62c06-bfa2-435e-a497-7d0ce40f0fd4-kube-api-access-h2dmj\") pod \"watcher-operator-controller-manager-bccc79885-9pvd5\" (UID: \"1ac62c06-bfa2-435e-a497-7d0ce40f0fd4\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-9pvd5" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.732797 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-webhook-certs\") pod \"openstack-operator-controller-manager-67c78cbb8b-dmjqm\" (UID: \"d5f991dd-8062-43a8-8725-7a60c5a27a14\") " pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.732850 4981 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m874t\" (UniqueName: \"kubernetes.io/projected/d5f991dd-8062-43a8-8725-7a60c5a27a14-kube-api-access-m874t\") pod \"openstack-operator-controller-manager-67c78cbb8b-dmjqm\" (UID: \"d5f991dd-8062-43a8-8725-7a60c5a27a14\") " pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" Feb 27 19:04:18 crc kubenswrapper[4981]: E0227 19:04:18.733028 4981 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 27 19:04:18 crc kubenswrapper[4981]: E0227 19:04:18.733085 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-webhook-certs podName:d5f991dd-8062-43a8-8725-7a60c5a27a14 nodeName:}" failed. No retries permitted until 2026-02-27 19:04:19.233076104 +0000 UTC m=+1158.711857254 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-webhook-certs") pod "openstack-operator-controller-manager-67c78cbb8b-dmjqm" (UID: "d5f991dd-8062-43a8-8725-7a60c5a27a14") : secret "webhook-server-cert" not found Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.741378 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-xzqdb" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.761816 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-rb9hs" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.762651 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2dmj\" (UniqueName: \"kubernetes.io/projected/1ac62c06-bfa2-435e-a497-7d0ce40f0fd4-kube-api-access-h2dmj\") pod \"watcher-operator-controller-manager-bccc79885-9pvd5\" (UID: \"1ac62c06-bfa2-435e-a497-7d0ce40f0fd4\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-9pvd5" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.763220 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m874t\" (UniqueName: \"kubernetes.io/projected/d5f991dd-8062-43a8-8725-7a60c5a27a14-kube-api-access-m874t\") pod \"openstack-operator-controller-manager-67c78cbb8b-dmjqm\" (UID: \"d5f991dd-8062-43a8-8725-7a60c5a27a14\") " pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.763642 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c68b\" (UniqueName: \"kubernetes.io/projected/e210121e-ac51-4667-9bbc-7080ed583a49-kube-api-access-8c68b\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wlwb7\" (UID: \"e210121e-ac51-4667-9bbc-7080ed583a49\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wlwb7" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.770672 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2z9m\" (UniqueName: \"kubernetes.io/projected/f28d8002-92dc-43b8-a2d5-858fd350c18c-kube-api-access-x2z9m\") pod \"test-operator-controller-manager-55b5ff4dbb-w46z2\" (UID: \"f28d8002-92dc-43b8-a2d5-858fd350c18c\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-w46z2" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.800508 4981 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-w46z2" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.800607 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-9fqts"] Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.834282 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87494b7e-5ff9-4bbf-b2b6-848c5d9269dc-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9chxvph\" (UID: \"87494b7e-5ff9-4bbf-b2b6-848c5d9269dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9chxvph" Feb 27 19:04:18 crc kubenswrapper[4981]: E0227 19:04:18.834477 4981 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 19:04:18 crc kubenswrapper[4981]: E0227 19:04:18.834560 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87494b7e-5ff9-4bbf-b2b6-848c5d9269dc-cert podName:87494b7e-5ff9-4bbf-b2b6-848c5d9269dc nodeName:}" failed. No retries permitted until 2026-02-27 19:04:19.834542011 +0000 UTC m=+1159.313323171 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/87494b7e-5ff9-4bbf-b2b6-848c5d9269dc-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9chxvph" (UID: "87494b7e-5ff9-4bbf-b2b6-848c5d9269dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.846227 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-9pvd5" Feb 27 19:04:18 crc kubenswrapper[4981]: I0227 19:04:18.978984 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wlwb7" Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.096816 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-59gjc"] Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.229406 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536984-jjx78" Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.253878 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-metrics-certs\") pod \"openstack-operator-controller-manager-67c78cbb8b-dmjqm\" (UID: \"d5f991dd-8062-43a8-8725-7a60c5a27a14\") " pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.256417 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-webhook-certs\") pod \"openstack-operator-controller-manager-67c78cbb8b-dmjqm\" (UID: \"d5f991dd-8062-43a8-8725-7a60c5a27a14\") " pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" Feb 27 19:04:19 crc kubenswrapper[4981]: E0227 19:04:19.256567 4981 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 27 19:04:19 crc kubenswrapper[4981]: E0227 19:04:19.256614 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-webhook-certs 
podName:d5f991dd-8062-43a8-8725-7a60c5a27a14 nodeName:}" failed. No retries permitted until 2026-02-27 19:04:20.256598955 +0000 UTC m=+1159.735380115 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-webhook-certs") pod "openstack-operator-controller-manager-67c78cbb8b-dmjqm" (UID: "d5f991dd-8062-43a8-8725-7a60c5a27a14") : secret "webhook-server-cert" not found Feb 27 19:04:19 crc kubenswrapper[4981]: E0227 19:04:19.259125 4981 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 27 19:04:19 crc kubenswrapper[4981]: E0227 19:04:19.259188 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-metrics-certs podName:d5f991dd-8062-43a8-8725-7a60c5a27a14 nodeName:}" failed. No retries permitted until 2026-02-27 19:04:20.259168514 +0000 UTC m=+1159.737949664 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-metrics-certs") pod "openstack-operator-controller-manager-67c78cbb8b-dmjqm" (UID: "d5f991dd-8062-43a8-8725-7a60c5a27a14") : secret "metrics-server-cert" not found Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.330416 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-6fkss"] Feb 27 19:04:19 crc kubenswrapper[4981]: W0227 19:04:19.333793 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda21f22e5_6cc2_43cc_890c_c9e42d8b12c5.slice/crio-e96fd7dda6501e39102f335c1fe15b67402bdffc0aba973d1e260013f506698e WatchSource:0}: Error finding container e96fd7dda6501e39102f335c1fe15b67402bdffc0aba973d1e260013f506698e: Status 404 returned error can't find the container with id e96fd7dda6501e39102f335c1fe15b67402bdffc0aba973d1e260013f506698e Feb 27 19:04:19 crc kubenswrapper[4981]: W0227 19:04:19.335008 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67789b9f_79ac_4901_8acc_22a86fb876c4.slice/crio-f00425b0170a84eb02809fbf101bc8aedc421651b2d27cfc2a90c752ac304440 WatchSource:0}: Error finding container f00425b0170a84eb02809fbf101bc8aedc421651b2d27cfc2a90c752ac304440: Status 404 returned error can't find the container with id f00425b0170a84eb02809fbf101bc8aedc421651b2d27cfc2a90c752ac304440 Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.335604 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-6kvkg"] Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.358011 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx4rf\" (UniqueName: 
\"kubernetes.io/projected/cce3f7fc-3761-458a-91ed-53ff41805400-kube-api-access-qx4rf\") pod \"cce3f7fc-3761-458a-91ed-53ff41805400\" (UID: \"cce3f7fc-3761-458a-91ed-53ff41805400\") " Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.360862 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-dhczx"] Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.366517 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cce3f7fc-3761-458a-91ed-53ff41805400-kube-api-access-qx4rf" (OuterVolumeSpecName: "kube-api-access-qx4rf") pod "cce3f7fc-3761-458a-91ed-53ff41805400" (UID: "cce3f7fc-3761-458a-91ed-53ff41805400"). InnerVolumeSpecName "kube-api-access-qx4rf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.461580 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx4rf\" (UniqueName: \"kubernetes.io/projected/cce3f7fc-3761-458a-91ed-53ff41805400-kube-api-access-qx4rf\") on node \"crc\" DevicePath \"\"" Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.612811 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55ffd4876b-sblkk"] Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.620141 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-bq9tz"] Feb 27 19:04:19 crc kubenswrapper[4981]: W0227 19:04:19.625734 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96d76f06_213f_4b51_9dfa_7e77c5b97174.slice/crio-87e1afe97fc803cfe6633b0ed99ec61d897bb0bf4b25536abf508a0b729630e8 WatchSource:0}: Error finding container 87e1afe97fc803cfe6633b0ed99ec61d897bb0bf4b25536abf508a0b729630e8: Status 404 returned error can't find the container with id 
87e1afe97fc803cfe6633b0ed99ec61d897bb0bf4b25536abf508a0b729630e8 Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.626761 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-556b8b874-k2kn8"] Feb 27 19:04:19 crc kubenswrapper[4981]: W0227 19:04:19.636981 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cbe4d2e_bd57_452d_b873_709e1de024e7.slice/crio-6d5ed310ad9e99e4f1560feb493e2b0081e8bfe06833716f418bd0fd6729cc7b WatchSource:0}: Error finding container 6d5ed310ad9e99e4f1560feb493e2b0081e8bfe06833716f418bd0fd6729cc7b: Status 404 returned error can't find the container with id 6d5ed310ad9e99e4f1560feb493e2b0081e8bfe06833716f418bd0fd6729cc7b Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.651221 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-bq9tz" event={"ID":"8120c80b-1df9-4534-b5c6-1ff42e7dd5f9","Type":"ContainerStarted","Data":"f4d769a1e5addac009386e964be5ecfde7ed3f00812d1bbf5f28c6bb27076281"} Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.651265 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-bvts5"] Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.651282 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-6kvkg" event={"ID":"5e144a53-3c1b-49db-9f08-d93ebe9fb576","Type":"ContainerStarted","Data":"845f12f5cc0bcdc944e7c03a36d9c31dda3b3895a441430ceab3870bb9724dbe"} Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.651295 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-xzqdb"] Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.651306 4981 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-jtd4l" event={"ID":"fdf56547-b3d3-4481-acea-493c4ea4b2d9","Type":"ContainerStarted","Data":"b44c96753eea4551e2a840a1989a01c8ea1fcc2d4e6ea5346718e5c2856ba6bc"} Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.651316 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-6fkss" event={"ID":"67789b9f-79ac-4901-8acc-22a86fb876c4","Type":"ContainerStarted","Data":"f00425b0170a84eb02809fbf101bc8aedc421651b2d27cfc2a90c752ac304440"} Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.651465 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-dhczx" event={"ID":"a21f22e5-6cc2-43cc-890c-c9e42d8b12c5","Type":"ContainerStarted","Data":"e96fd7dda6501e39102f335c1fe15b67402bdffc0aba973d1e260013f506698e"} Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.653157 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-tjvzl"] Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.657026 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536984-jjx78" event={"ID":"cce3f7fc-3761-458a-91ed-53ff41805400","Type":"ContainerDied","Data":"3cbb7e0e4d0fb683fc88b45577478b1a779ad461070923fce087d05f577c1404"} Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.657075 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536984-jjx78" Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.657093 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cbb7e0e4d0fb683fc88b45577478b1a779ad461070923fce087d05f577c1404" Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.657938 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-9fqts" event={"ID":"7a1c1676-014d-4de0-ab20-a951ad5bb7fe","Type":"ContainerStarted","Data":"50c83fd18743e00bb7469648a4992cf8174846aabe5a71cf5e576531518db143"} Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.658579 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fdb694969-rb9hs"] Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.658794 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-59gjc" event={"ID":"2741c246-6bf8-411d-bdd7-29cb20588c0c","Type":"ContainerStarted","Data":"b45cfd4ce7b567f8d62b4185434b1fc26ceddbd6dd766c6cc30644d5604eb391"} Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.659526 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-k2kn8" event={"ID":"e1c487e5-53af-41ef-8713-87d17ab9632d","Type":"ContainerStarted","Data":"6e8515d893de4b7aec7f1ac429d3e5e3b14edaa450a9203ad8c700640ce7dc33"} Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.663404 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2042b3a5-c802-49c2-911b-b28eb19aecf5-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-r27km\" (UID: \"2042b3a5-c802-49c2-911b-b28eb19aecf5\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-r27km" Feb 27 19:04:19 crc kubenswrapper[4981]: 
E0227 19:04:19.664343 4981 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 27 19:04:19 crc kubenswrapper[4981]: E0227 19:04:19.664411 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2042b3a5-c802-49c2-911b-b28eb19aecf5-cert podName:2042b3a5-c802-49c2-911b-b28eb19aecf5 nodeName:}" failed. No retries permitted until 2026-02-27 19:04:21.664390944 +0000 UTC m=+1161.143172104 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2042b3a5-c802-49c2-911b-b28eb19aecf5-cert") pod "infra-operator-controller-manager-f7fcc58b9-r27km" (UID: "2042b3a5-c802-49c2-911b-b28eb19aecf5") : secret "infra-operator-webhook-server-cert" not found Feb 27 19:04:19 crc kubenswrapper[4981]: W0227 19:04:19.759591 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cd1d521_194a_48fa_9412_a95ff0c2c598.slice/crio-86ba2f22cad6e16c17f5d2d2496c620a54984f6cedaf9bdbc5b1a1d3c83d814d WatchSource:0}: Error finding container 86ba2f22cad6e16c17f5d2d2496c620a54984f6cedaf9bdbc5b1a1d3c83d814d: Status 404 returned error can't find the container with id 86ba2f22cad6e16c17f5d2d2496c620a54984f6cedaf9bdbc5b1a1d3c83d814d Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.760282 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-h9cbz"] Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.770165 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-sbs2q"] Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.774793 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-6xfvc"] Feb 27 19:04:19 crc kubenswrapper[4981]: 
W0227 19:04:19.776774 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12a33549_4f35_4fe1_851c_21e46a44dff6.slice/crio-1d1ce83ede7804733869e793a0831bfed898d2289a0c9634b999844ae3afe6ce WatchSource:0}: Error finding container 1d1ce83ede7804733869e793a0831bfed898d2289a0c9634b999844ae3afe6ce: Status 404 returned error can't find the container with id 1d1ce83ede7804733869e793a0831bfed898d2289a0c9634b999844ae3afe6ce Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.789562 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-jvb62"] Feb 27 19:04:19 crc kubenswrapper[4981]: E0227 19:04:19.791036 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5cflq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-54688575f-6xfvc_openstack-operators(1636f598-89d5-474c-85a9-69ea06f889de): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 19:04:19 crc kubenswrapper[4981]: E0227 19:04:19.792998 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-54688575f-6xfvc" podUID="1636f598-89d5-474c-85a9-69ea06f889de" Feb 27 19:04:19 crc kubenswrapper[4981]: W0227 19:04:19.797070 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70ce2fb0_509d_4f5a_aff5_8b71df9f78c4.slice/crio-4bebf545fe6b7141c9b81b771b79d3e5ca87cb84cdc02f2adbe11d8a2a159000 WatchSource:0}: Error finding container 
4bebf545fe6b7141c9b81b771b79d3e5ca87cb84cdc02f2adbe11d8a2a159000: Status 404 returned error can't find the container with id 4bebf545fe6b7141c9b81b771b79d3e5ca87cb84cdc02f2adbe11d8a2a159000 Feb 27 19:04:19 crc kubenswrapper[4981]: E0227 19:04:19.802722 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:2d59045b8d8e6f9c5483c4fdda7c5057218d553200dc4bcf26789980ac1d9abd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fb6rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5d86c7ddb7-jvb62_openstack-operators(70ce2fb0-509d-4f5a-aff5-8b71df9f78c4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 19:04:19 crc kubenswrapper[4981]: E0227 19:04:19.806134 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-jvb62" podUID="70ce2fb0-509d-4f5a-aff5-8b71df9f78c4" Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.812326 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-w46z2"] Feb 27 19:04:19 crc kubenswrapper[4981]: W0227 19:04:19.814224 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf28d8002_92dc_43b8_a2d5_858fd350c18c.slice/crio-7a969b9c24e55fc147cb1f9ae4da30b5fe3292179242945f2f2936b3f98f58d0 WatchSource:0}: Error finding container 7a969b9c24e55fc147cb1f9ae4da30b5fe3292179242945f2f2936b3f98f58d0: Status 404 returned error can't find the container with id 
7a969b9c24e55fc147cb1f9ae4da30b5fe3292179242945f2f2936b3f98f58d0 Feb 27 19:04:19 crc kubenswrapper[4981]: E0227 19:04:19.816967 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x2z9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-55b5ff4dbb-w46z2_openstack-operators(f28d8002-92dc-43b8-a2d5-858fd350c18c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 19:04:19 crc kubenswrapper[4981]: E0227 19:04:19.818104 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-w46z2" podUID="f28d8002-92dc-43b8-a2d5-858fd350c18c" Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.852770 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-9pvd5"] Feb 27 19:04:19 crc kubenswrapper[4981]: E0227 19:04:19.857676 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h2dmj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-bccc79885-9pvd5_openstack-operators(1ac62c06-bfa2-435e-a497-7d0ce40f0fd4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 19:04:19 crc kubenswrapper[4981]: W0227 19:04:19.858766 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode210121e_ac51_4667_9bbc_7080ed583a49.slice/crio-ee8d41e10835e45135f34325879928ee122a35690be519a84bfaafe2c94b315b WatchSource:0}: Error finding container ee8d41e10835e45135f34325879928ee122a35690be519a84bfaafe2c94b315b: Status 404 returned error can't find the container with id ee8d41e10835e45135f34325879928ee122a35690be519a84bfaafe2c94b315b Feb 27 19:04:19 crc kubenswrapper[4981]: E0227 19:04:19.858827 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-9pvd5" podUID="1ac62c06-bfa2-435e-a497-7d0ce40f0fd4" Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.861728 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wlwb7"] Feb 27 19:04:19 crc kubenswrapper[4981]: I0227 19:04:19.865950 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87494b7e-5ff9-4bbf-b2b6-848c5d9269dc-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9chxvph\" (UID: \"87494b7e-5ff9-4bbf-b2b6-848c5d9269dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9chxvph" Feb 27 19:04:19 crc kubenswrapper[4981]: E0227 19:04:19.865930 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8c68b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-wlwb7_openstack-operators(e210121e-ac51-4667-9bbc-7080ed583a49): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 19:04:19 crc kubenswrapper[4981]: E0227 19:04:19.866292 4981 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 19:04:19 crc kubenswrapper[4981]: E0227 19:04:19.866369 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87494b7e-5ff9-4bbf-b2b6-848c5d9269dc-cert podName:87494b7e-5ff9-4bbf-b2b6-848c5d9269dc nodeName:}" failed. No retries permitted until 2026-02-27 19:04:21.866346779 +0000 UTC m=+1161.345127949 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/87494b7e-5ff9-4bbf-b2b6-848c5d9269dc-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9chxvph" (UID: "87494b7e-5ff9-4bbf-b2b6-848c5d9269dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 19:04:19 crc kubenswrapper[4981]: E0227 19:04:19.867973 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wlwb7" podUID="e210121e-ac51-4667-9bbc-7080ed583a49" Feb 27 19:04:20 crc kubenswrapper[4981]: I0227 19:04:20.249734 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:04:20 crc kubenswrapper[4981]: I0227 19:04:20.249819 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:04:20 crc kubenswrapper[4981]: I0227 19:04:20.272182 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-metrics-certs\") pod \"openstack-operator-controller-manager-67c78cbb8b-dmjqm\" (UID: \"d5f991dd-8062-43a8-8725-7a60c5a27a14\") " pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" Feb 27 19:04:20 crc kubenswrapper[4981]: I0227 19:04:20.272298 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-webhook-certs\") pod \"openstack-operator-controller-manager-67c78cbb8b-dmjqm\" (UID: \"d5f991dd-8062-43a8-8725-7a60c5a27a14\") " pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" Feb 27 19:04:20 crc kubenswrapper[4981]: E0227 19:04:20.272764 4981 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 27 19:04:20 crc kubenswrapper[4981]: E0227 19:04:20.272829 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-metrics-certs podName:d5f991dd-8062-43a8-8725-7a60c5a27a14 nodeName:}" failed. No retries permitted until 2026-02-27 19:04:22.272811455 +0000 UTC m=+1161.751592615 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-metrics-certs") pod "openstack-operator-controller-manager-67c78cbb8b-dmjqm" (UID: "d5f991dd-8062-43a8-8725-7a60c5a27a14") : secret "metrics-server-cert" not found Feb 27 19:04:20 crc kubenswrapper[4981]: E0227 19:04:20.272967 4981 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 27 19:04:20 crc kubenswrapper[4981]: E0227 19:04:20.273107 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-webhook-certs podName:d5f991dd-8062-43a8-8725-7a60c5a27a14 nodeName:}" failed. No retries permitted until 2026-02-27 19:04:22.273070673 +0000 UTC m=+1161.751851833 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-webhook-certs") pod "openstack-operator-controller-manager-67c78cbb8b-dmjqm" (UID: "d5f991dd-8062-43a8-8725-7a60c5a27a14") : secret "webhook-server-cert" not found Feb 27 19:04:20 crc kubenswrapper[4981]: I0227 19:04:20.287623 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536978-drs27"] Feb 27 19:04:20 crc kubenswrapper[4981]: I0227 19:04:20.292621 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536978-drs27"] Feb 27 19:04:20 crc kubenswrapper[4981]: I0227 19:04:20.667815 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-bvts5" event={"ID":"7cbe4d2e-bd57-452d-b873-709e1de024e7","Type":"ContainerStarted","Data":"6d5ed310ad9e99e4f1560feb493e2b0081e8bfe06833716f418bd0fd6729cc7b"} Feb 27 19:04:20 crc kubenswrapper[4981]: I0227 19:04:20.669375 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-6xfvc" event={"ID":"1636f598-89d5-474c-85a9-69ea06f889de","Type":"ContainerStarted","Data":"0b77f65c283389d06761ae3088f5565e9df01e996e6ff1464d6faf46e5ff4e32"} Feb 27 19:04:20 crc kubenswrapper[4981]: E0227 19:04:20.671952 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-54688575f-6xfvc" podUID="1636f598-89d5-474c-85a9-69ea06f889de" Feb 27 19:04:20 crc kubenswrapper[4981]: I0227 19:04:20.672805 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-sblkk" event={"ID":"96d76f06-213f-4b51-9dfa-7e77c5b97174","Type":"ContainerStarted","Data":"87e1afe97fc803cfe6633b0ed99ec61d897bb0bf4b25536abf508a0b729630e8"} Feb 27 19:04:20 crc kubenswrapper[4981]: I0227 19:04:20.673978 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-sbs2q" event={"ID":"12a33549-4f35-4fe1-851c-21e46a44dff6","Type":"ContainerStarted","Data":"1d1ce83ede7804733869e793a0831bfed898d2289a0c9634b999844ae3afe6ce"} Feb 27 19:04:20 crc kubenswrapper[4981]: I0227 19:04:20.675317 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-9pvd5" event={"ID":"1ac62c06-bfa2-435e-a497-7d0ce40f0fd4","Type":"ContainerStarted","Data":"ecec9468973a3a661bbc3be07fbf36c9ea84dc7ea49e741662ebe3c72fa6e731"} Feb 27 19:04:20 crc kubenswrapper[4981]: E0227 19:04:20.676317 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-9pvd5" podUID="1ac62c06-bfa2-435e-a497-7d0ce40f0fd4" Feb 27 19:04:20 crc kubenswrapper[4981]: I0227 19:04:20.676745 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-jvb62" event={"ID":"70ce2fb0-509d-4f5a-aff5-8b71df9f78c4","Type":"ContainerStarted","Data":"4bebf545fe6b7141c9b81b771b79d3e5ca87cb84cdc02f2adbe11d8a2a159000"} Feb 27 19:04:20 crc kubenswrapper[4981]: I0227 19:04:20.679385 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wlwb7" 
event={"ID":"e210121e-ac51-4667-9bbc-7080ed583a49","Type":"ContainerStarted","Data":"ee8d41e10835e45135f34325879928ee122a35690be519a84bfaafe2c94b315b"} Feb 27 19:04:20 crc kubenswrapper[4981]: I0227 19:04:20.683984 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-xzqdb" event={"ID":"2691a6d3-7eae-4d8c-b2e4-2157b87f0766","Type":"ContainerStarted","Data":"bd9789948b93edd2856bc21ca5f418923b28fd3dfdd31e5a7d57516a0e2f86cb"} Feb 27 19:04:20 crc kubenswrapper[4981]: I0227 19:04:20.693386 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-w46z2" event={"ID":"f28d8002-92dc-43b8-a2d5-858fd350c18c","Type":"ContainerStarted","Data":"7a969b9c24e55fc147cb1f9ae4da30b5fe3292179242945f2f2936b3f98f58d0"} Feb 27 19:04:20 crc kubenswrapper[4981]: E0227 19:04:20.693570 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:2d59045b8d8e6f9c5483c4fdda7c5057218d553200dc4bcf26789980ac1d9abd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-jvb62" podUID="70ce2fb0-509d-4f5a-aff5-8b71df9f78c4" Feb 27 19:04:20 crc kubenswrapper[4981]: E0227 19:04:20.694018 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wlwb7" podUID="e210121e-ac51-4667-9bbc-7080ed583a49" Feb 27 19:04:20 crc kubenswrapper[4981]: E0227 19:04:20.694910 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968\\\"\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-w46z2" podUID="f28d8002-92dc-43b8-a2d5-858fd350c18c" Feb 27 19:04:20 crc kubenswrapper[4981]: I0227 19:04:20.698287 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-tjvzl" event={"ID":"2cd1d521-194a-48fa-9412-a95ff0c2c598","Type":"ContainerStarted","Data":"86ba2f22cad6e16c17f5d2d2496c620a54984f6cedaf9bdbc5b1a1d3c83d814d"} Feb 27 19:04:20 crc kubenswrapper[4981]: I0227 19:04:20.707036 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-h9cbz" event={"ID":"e9987372-8f11-4038-939a-75d1152e5667","Type":"ContainerStarted","Data":"b6e83ddaf005c3e1f611d2ca9f07c2140cda1056d5654b3f79a6fcbd6ce26a53"} Feb 27 19:04:20 crc kubenswrapper[4981]: I0227 19:04:20.710963 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-rb9hs" event={"ID":"93a2ac79-08d9-4559-a954-bdf0b2eb4dab","Type":"ContainerStarted","Data":"8727051e27d2fdcbfdaa6d8884f8b8e9f00830f91e1c80f51a144fc3a7d838f4"} Feb 27 19:04:21 crc kubenswrapper[4981]: I0227 19:04:21.642039 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7acbd9d5-f113-4fdc-8ee8-02a2df5d840e" path="/var/lib/kubelet/pods/7acbd9d5-f113-4fdc-8ee8-02a2df5d840e/volumes" Feb 27 19:04:21 crc kubenswrapper[4981]: I0227 19:04:21.720854 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2042b3a5-c802-49c2-911b-b28eb19aecf5-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-r27km\" (UID: \"2042b3a5-c802-49c2-911b-b28eb19aecf5\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-r27km" Feb 27 
19:04:21 crc kubenswrapper[4981]: E0227 19:04:21.721089 4981 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 27 19:04:21 crc kubenswrapper[4981]: E0227 19:04:21.721147 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2042b3a5-c802-49c2-911b-b28eb19aecf5-cert podName:2042b3a5-c802-49c2-911b-b28eb19aecf5 nodeName:}" failed. No retries permitted until 2026-02-27 19:04:25.721132727 +0000 UTC m=+1165.199913887 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2042b3a5-c802-49c2-911b-b28eb19aecf5-cert") pod "infra-operator-controller-manager-f7fcc58b9-r27km" (UID: "2042b3a5-c802-49c2-911b-b28eb19aecf5") : secret "infra-operator-webhook-server-cert" not found Feb 27 19:04:21 crc kubenswrapper[4981]: E0227 19:04:21.732935 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:2d59045b8d8e6f9c5483c4fdda7c5057218d553200dc4bcf26789980ac1d9abd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-jvb62" podUID="70ce2fb0-509d-4f5a-aff5-8b71df9f78c4" Feb 27 19:04:21 crc kubenswrapper[4981]: E0227 19:04:21.732936 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-54688575f-6xfvc" podUID="1636f598-89d5-474c-85a9-69ea06f889de" Feb 27 19:04:21 crc kubenswrapper[4981]: E0227 19:04:21.733009 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wlwb7" podUID="e210121e-ac51-4667-9bbc-7080ed583a49" Feb 27 19:04:21 crc kubenswrapper[4981]: E0227 19:04:21.733100 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968\\\"\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-w46z2" podUID="f28d8002-92dc-43b8-a2d5-858fd350c18c" Feb 27 19:04:21 crc kubenswrapper[4981]: E0227 19:04:21.744927 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-9pvd5" podUID="1ac62c06-bfa2-435e-a497-7d0ce40f0fd4" Feb 27 19:04:21 crc kubenswrapper[4981]: I0227 19:04:21.924386 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87494b7e-5ff9-4bbf-b2b6-848c5d9269dc-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9chxvph\" (UID: \"87494b7e-5ff9-4bbf-b2b6-848c5d9269dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9chxvph" Feb 27 19:04:21 crc kubenswrapper[4981]: E0227 19:04:21.924990 4981 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 19:04:21 crc kubenswrapper[4981]: E0227 19:04:21.925065 4981 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87494b7e-5ff9-4bbf-b2b6-848c5d9269dc-cert podName:87494b7e-5ff9-4bbf-b2b6-848c5d9269dc nodeName:}" failed. No retries permitted until 2026-02-27 19:04:25.925034161 +0000 UTC m=+1165.403815321 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/87494b7e-5ff9-4bbf-b2b6-848c5d9269dc-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9chxvph" (UID: "87494b7e-5ff9-4bbf-b2b6-848c5d9269dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 19:04:22 crc kubenswrapper[4981]: I0227 19:04:22.330566 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-metrics-certs\") pod \"openstack-operator-controller-manager-67c78cbb8b-dmjqm\" (UID: \"d5f991dd-8062-43a8-8725-7a60c5a27a14\") " pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" Feb 27 19:04:22 crc kubenswrapper[4981]: I0227 19:04:22.330673 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-webhook-certs\") pod \"openstack-operator-controller-manager-67c78cbb8b-dmjqm\" (UID: \"d5f991dd-8062-43a8-8725-7a60c5a27a14\") " pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" Feb 27 19:04:22 crc kubenswrapper[4981]: E0227 19:04:22.330815 4981 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 27 19:04:22 crc kubenswrapper[4981]: E0227 19:04:22.330865 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-webhook-certs podName:d5f991dd-8062-43a8-8725-7a60c5a27a14 nodeName:}" failed. 
No retries permitted until 2026-02-27 19:04:26.330850119 +0000 UTC m=+1165.809631279 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-webhook-certs") pod "openstack-operator-controller-manager-67c78cbb8b-dmjqm" (UID: "d5f991dd-8062-43a8-8725-7a60c5a27a14") : secret "webhook-server-cert" not found Feb 27 19:04:22 crc kubenswrapper[4981]: E0227 19:04:22.331182 4981 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 27 19:04:22 crc kubenswrapper[4981]: E0227 19:04:22.331274 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-metrics-certs podName:d5f991dd-8062-43a8-8725-7a60c5a27a14 nodeName:}" failed. No retries permitted until 2026-02-27 19:04:26.331255691 +0000 UTC m=+1165.810036851 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-metrics-certs") pod "openstack-operator-controller-manager-67c78cbb8b-dmjqm" (UID: "d5f991dd-8062-43a8-8725-7a60c5a27a14") : secret "metrics-server-cert" not found Feb 27 19:04:25 crc kubenswrapper[4981]: I0227 19:04:25.726486 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2042b3a5-c802-49c2-911b-b28eb19aecf5-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-r27km\" (UID: \"2042b3a5-c802-49c2-911b-b28eb19aecf5\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-r27km" Feb 27 19:04:25 crc kubenswrapper[4981]: E0227 19:04:25.727850 4981 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 27 19:04:25 crc kubenswrapper[4981]: E0227 19:04:25.727894 4981 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/secret/2042b3a5-c802-49c2-911b-b28eb19aecf5-cert podName:2042b3a5-c802-49c2-911b-b28eb19aecf5 nodeName:}" failed. No retries permitted until 2026-02-27 19:04:33.727879996 +0000 UTC m=+1173.206661156 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2042b3a5-c802-49c2-911b-b28eb19aecf5-cert") pod "infra-operator-controller-manager-f7fcc58b9-r27km" (UID: "2042b3a5-c802-49c2-911b-b28eb19aecf5") : secret "infra-operator-webhook-server-cert" not found Feb 27 19:04:25 crc kubenswrapper[4981]: I0227 19:04:25.930047 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87494b7e-5ff9-4bbf-b2b6-848c5d9269dc-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9chxvph\" (UID: \"87494b7e-5ff9-4bbf-b2b6-848c5d9269dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9chxvph" Feb 27 19:04:25 crc kubenswrapper[4981]: E0227 19:04:25.930216 4981 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 19:04:25 crc kubenswrapper[4981]: E0227 19:04:25.930541 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87494b7e-5ff9-4bbf-b2b6-848c5d9269dc-cert podName:87494b7e-5ff9-4bbf-b2b6-848c5d9269dc nodeName:}" failed. No retries permitted until 2026-02-27 19:04:33.930523963 +0000 UTC m=+1173.409305123 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/87494b7e-5ff9-4bbf-b2b6-848c5d9269dc-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9chxvph" (UID: "87494b7e-5ff9-4bbf-b2b6-848c5d9269dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 19:04:26 crc kubenswrapper[4981]: I0227 19:04:26.335240 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-metrics-certs\") pod \"openstack-operator-controller-manager-67c78cbb8b-dmjqm\" (UID: \"d5f991dd-8062-43a8-8725-7a60c5a27a14\") " pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" Feb 27 19:04:26 crc kubenswrapper[4981]: E0227 19:04:26.335381 4981 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 27 19:04:26 crc kubenswrapper[4981]: I0227 19:04:26.335405 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-webhook-certs\") pod \"openstack-operator-controller-manager-67c78cbb8b-dmjqm\" (UID: \"d5f991dd-8062-43a8-8725-7a60c5a27a14\") " pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" Feb 27 19:04:26 crc kubenswrapper[4981]: E0227 19:04:26.335451 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-metrics-certs podName:d5f991dd-8062-43a8-8725-7a60c5a27a14 nodeName:}" failed. No retries permitted until 2026-02-27 19:04:34.335432092 +0000 UTC m=+1173.814213252 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-metrics-certs") pod "openstack-operator-controller-manager-67c78cbb8b-dmjqm" (UID: "d5f991dd-8062-43a8-8725-7a60c5a27a14") : secret "metrics-server-cert" not found Feb 27 19:04:26 crc kubenswrapper[4981]: E0227 19:04:26.335579 4981 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 27 19:04:26 crc kubenswrapper[4981]: E0227 19:04:26.335645 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-webhook-certs podName:d5f991dd-8062-43a8-8725-7a60c5a27a14 nodeName:}" failed. No retries permitted until 2026-02-27 19:04:34.335629008 +0000 UTC m=+1173.814410168 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-webhook-certs") pod "openstack-operator-controller-manager-67c78cbb8b-dmjqm" (UID: "d5f991dd-8062-43a8-8725-7a60c5a27a14") : secret "webhook-server-cert" not found Feb 27 19:04:32 crc kubenswrapper[4981]: I0227 19:04:32.630634 4981 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 19:04:33 crc kubenswrapper[4981]: I0227 19:04:33.739632 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2042b3a5-c802-49c2-911b-b28eb19aecf5-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-r27km\" (UID: \"2042b3a5-c802-49c2-911b-b28eb19aecf5\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-r27km" Feb 27 19:04:33 crc kubenswrapper[4981]: E0227 19:04:33.739915 4981 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 27 19:04:33 crc kubenswrapper[4981]: E0227 
19:04:33.739994 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2042b3a5-c802-49c2-911b-b28eb19aecf5-cert podName:2042b3a5-c802-49c2-911b-b28eb19aecf5 nodeName:}" failed. No retries permitted until 2026-02-27 19:04:49.739969512 +0000 UTC m=+1189.218750712 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2042b3a5-c802-49c2-911b-b28eb19aecf5-cert") pod "infra-operator-controller-manager-f7fcc58b9-r27km" (UID: "2042b3a5-c802-49c2-911b-b28eb19aecf5") : secret "infra-operator-webhook-server-cert" not found Feb 27 19:04:33 crc kubenswrapper[4981]: E0227 19:04:33.882761 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:ee642fcf655f9897d480460008cba2e98b497d3ffdf7ab1d48ea460eb20c2053" Feb 27 19:04:33 crc kubenswrapper[4981]: E0227 19:04:33.883277 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:ee642fcf655f9897d480460008cba2e98b497d3ffdf7ab1d48ea460eb20c2053,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jlnhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-cf99c678f-59gjc_openstack-operators(2741c246-6bf8-411d-bdd7-29cb20588c0c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:04:33 crc kubenswrapper[4981]: E0227 19:04:33.884677 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/heat-operator-controller-manager-cf99c678f-59gjc" podUID="2741c246-6bf8-411d-bdd7-29cb20588c0c" Feb 27 19:04:33 crc kubenswrapper[4981]: I0227 19:04:33.942634 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87494b7e-5ff9-4bbf-b2b6-848c5d9269dc-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9chxvph\" (UID: \"87494b7e-5ff9-4bbf-b2b6-848c5d9269dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9chxvph" Feb 27 19:04:33 crc kubenswrapper[4981]: E0227 19:04:33.942955 4981 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 19:04:33 crc kubenswrapper[4981]: E0227 19:04:33.943122 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87494b7e-5ff9-4bbf-b2b6-848c5d9269dc-cert podName:87494b7e-5ff9-4bbf-b2b6-848c5d9269dc nodeName:}" failed. No retries permitted until 2026-02-27 19:04:49.943083832 +0000 UTC m=+1189.421865032 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/87494b7e-5ff9-4bbf-b2b6-848c5d9269dc-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9chxvph" (UID: "87494b7e-5ff9-4bbf-b2b6-848c5d9269dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 19:04:34 crc kubenswrapper[4981]: E0227 19:04:34.266446 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:ee642fcf655f9897d480460008cba2e98b497d3ffdf7ab1d48ea460eb20c2053\\\"\"" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-59gjc" podUID="2741c246-6bf8-411d-bdd7-29cb20588c0c" Feb 27 19:04:34 crc kubenswrapper[4981]: I0227 19:04:34.349023 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-webhook-certs\") pod \"openstack-operator-controller-manager-67c78cbb8b-dmjqm\" (UID: \"d5f991dd-8062-43a8-8725-7a60c5a27a14\") " pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" Feb 27 19:04:34 crc kubenswrapper[4981]: I0227 19:04:34.349148 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-metrics-certs\") pod \"openstack-operator-controller-manager-67c78cbb8b-dmjqm\" (UID: \"d5f991dd-8062-43a8-8725-7a60c5a27a14\") " pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" Feb 27 19:04:34 crc kubenswrapper[4981]: E0227 19:04:34.349366 4981 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 27 19:04:34 crc kubenswrapper[4981]: E0227 19:04:34.349432 4981 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-metrics-certs podName:d5f991dd-8062-43a8-8725-7a60c5a27a14 nodeName:}" failed. No retries permitted until 2026-02-27 19:04:50.349414416 +0000 UTC m=+1189.828195576 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-metrics-certs") pod "openstack-operator-controller-manager-67c78cbb8b-dmjqm" (UID: "d5f991dd-8062-43a8-8725-7a60c5a27a14") : secret "metrics-server-cert" not found Feb 27 19:04:34 crc kubenswrapper[4981]: E0227 19:04:34.349805 4981 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 27 19:04:34 crc kubenswrapper[4981]: E0227 19:04:34.349843 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-webhook-certs podName:d5f991dd-8062-43a8-8725-7a60c5a27a14 nodeName:}" failed. No retries permitted until 2026-02-27 19:04:50.349833839 +0000 UTC m=+1189.828614999 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-webhook-certs") pod "openstack-operator-controller-manager-67c78cbb8b-dmjqm" (UID: "d5f991dd-8062-43a8-8725-7a60c5a27a14") : secret "webhook-server-cert" not found Feb 27 19:04:38 crc kubenswrapper[4981]: E0227 19:04:38.330909 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26" Feb 27 19:04:38 crc kubenswrapper[4981]: E0227 19:04:38.331461 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b7h5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-67d996989d-6fkss_openstack-operators(67789b9f-79ac-4901-8acc-22a86fb876c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:04:38 crc kubenswrapper[4981]: E0227 19:04:38.332668 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-6fkss" podUID="67789b9f-79ac-4901-8acc-22a86fb876c4" Feb 27 19:04:38 crc kubenswrapper[4981]: E0227 19:04:38.563378 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/manila-operator@sha256:f1158ec4d879c4646eee4323bc501eba4d377beb2ad6fbe08ed30070c441ac26\\\"\"" pod="openstack-operators/manila-operator-controller-manager-67d996989d-6fkss" podUID="67789b9f-79ac-4901-8acc-22a86fb876c4" Feb 27 19:04:39 crc kubenswrapper[4981]: E0227 19:04:39.006777 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:114c0dee0bab1d453890e9dcc7727de749055bdbea049384d5696e7ac8d78fe3" Feb 27 19:04:39 crc kubenswrapper[4981]: E0227 19:04:39.007975 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:114c0dee0bab1d453890e9dcc7727de749055bdbea049384d5696e7ac8d78fe3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kphpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-78bc7f9bd9-dhczx_openstack-operators(a21f22e5-6cc2-43cc-890c-c9e42d8b12c5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:04:39 crc kubenswrapper[4981]: E0227 19:04:39.009942 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-dhczx" podUID="a21f22e5-6cc2-43cc-890c-c9e42d8b12c5" Feb 27 19:04:39 crc kubenswrapper[4981]: E0227 19:04:39.551554 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:71f2ab3bb41d1743287a3270dd49e32192b347d8ba7353d2250cbd7e8528219b" Feb 27 19:04:39 crc kubenswrapper[4981]: E0227 19:04:39.551764 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:71f2ab3bb41d1743287a3270dd49e32192b347d8ba7353d2250cbd7e8528219b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wlzn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-556b8b874-k2kn8_openstack-operators(e1c487e5-53af-41ef-8713-87d17ab9632d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:04:39 crc kubenswrapper[4981]: E0227 19:04:39.553005 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-k2kn8" podUID="e1c487e5-53af-41ef-8713-87d17ab9632d" Feb 27 19:04:39 crc kubenswrapper[4981]: E0227 19:04:39.569690 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:114c0dee0bab1d453890e9dcc7727de749055bdbea049384d5696e7ac8d78fe3\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-dhczx" podUID="a21f22e5-6cc2-43cc-890c-c9e42d8b12c5" Feb 27 19:04:39 crc kubenswrapper[4981]: E0227 19:04:39.571492 4981 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:71f2ab3bb41d1743287a3270dd49e32192b347d8ba7353d2250cbd7e8528219b\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-k2kn8" podUID="e1c487e5-53af-41ef-8713-87d17ab9632d" Feb 27 19:04:41 crc kubenswrapper[4981]: E0227 19:04:41.546155 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c" Feb 27 19:04:41 crc kubenswrapper[4981]: E0227 19:04:41.546692 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sq68p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-75684d597f-h9cbz_openstack-operators(e9987372-8f11-4038-939a-75d1152e5667): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:04:41 crc kubenswrapper[4981]: E0227 19:04:41.548274 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-h9cbz" podUID="e9987372-8f11-4038-939a-75d1152e5667" Feb 27 19:04:41 crc kubenswrapper[4981]: E0227 19:04:41.585460 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-h9cbz" podUID="e9987372-8f11-4038-939a-75d1152e5667" Feb 27 19:04:46 crc kubenswrapper[4981]: E0227 19:04:46.151952 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214" Feb 27 19:04:46 crc kubenswrapper[4981]: E0227 19:04:46.152446 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-66lww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-5d87c9d997-sbs2q_openstack-operators(12a33549-4f35-4fe1-851c-21e46a44dff6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:04:46 crc kubenswrapper[4981]: E0227 19:04:46.153659 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-sbs2q" podUID="12a33549-4f35-4fe1-851c-21e46a44dff6" Feb 27 19:04:46 crc kubenswrapper[4981]: E0227 19:04:46.625385 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214\\\"\"" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-sbs2q" podUID="12a33549-4f35-4fe1-851c-21e46a44dff6" Feb 27 19:04:49 crc kubenswrapper[4981]: I0227 19:04:49.750080 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2042b3a5-c802-49c2-911b-b28eb19aecf5-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-r27km\" (UID: \"2042b3a5-c802-49c2-911b-b28eb19aecf5\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-r27km" Feb 27 19:04:49 crc kubenswrapper[4981]: I0227 19:04:49.764940 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2042b3a5-c802-49c2-911b-b28eb19aecf5-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-r27km\" (UID: \"2042b3a5-c802-49c2-911b-b28eb19aecf5\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-r27km" Feb 27 19:04:49 crc kubenswrapper[4981]: I0227 19:04:49.952575 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87494b7e-5ff9-4bbf-b2b6-848c5d9269dc-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9chxvph\" (UID: \"87494b7e-5ff9-4bbf-b2b6-848c5d9269dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9chxvph" Feb 27 19:04:49 crc kubenswrapper[4981]: I0227 19:04:49.958818 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87494b7e-5ff9-4bbf-b2b6-848c5d9269dc-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9chxvph\" (UID: \"87494b7e-5ff9-4bbf-b2b6-848c5d9269dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9chxvph" Feb 27 19:04:50 crc 
kubenswrapper[4981]: I0227 19:04:50.016400 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-r27km" Feb 27 19:04:50 crc kubenswrapper[4981]: I0227 19:04:50.022259 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9chxvph" Feb 27 19:04:50 crc kubenswrapper[4981]: I0227 19:04:50.248991 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:04:50 crc kubenswrapper[4981]: I0227 19:04:50.249114 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:04:50 crc kubenswrapper[4981]: I0227 19:04:50.359332 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-webhook-certs\") pod \"openstack-operator-controller-manager-67c78cbb8b-dmjqm\" (UID: \"d5f991dd-8062-43a8-8725-7a60c5a27a14\") " pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" Feb 27 19:04:50 crc kubenswrapper[4981]: I0227 19:04:50.359476 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-metrics-certs\") pod \"openstack-operator-controller-manager-67c78cbb8b-dmjqm\" (UID: \"d5f991dd-8062-43a8-8725-7a60c5a27a14\") " 
pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" Feb 27 19:04:50 crc kubenswrapper[4981]: I0227 19:04:50.365462 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-webhook-certs\") pod \"openstack-operator-controller-manager-67c78cbb8b-dmjqm\" (UID: \"d5f991dd-8062-43a8-8725-7a60c5a27a14\") " pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" Feb 27 19:04:50 crc kubenswrapper[4981]: I0227 19:04:50.365487 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5f991dd-8062-43a8-8725-7a60c5a27a14-metrics-certs\") pod \"openstack-operator-controller-manager-67c78cbb8b-dmjqm\" (UID: \"d5f991dd-8062-43a8-8725-7a60c5a27a14\") " pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" Feb 27 19:04:50 crc kubenswrapper[4981]: I0227 19:04:50.389192 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" Feb 27 19:05:02 crc kubenswrapper[4981]: I0227 19:05:02.972878 4981 scope.go:117] "RemoveContainer" containerID="610278bafe9213e09e992a980ac04a61586d101e3473c12e3fbaf4e3115972de" Feb 27 19:05:03 crc kubenswrapper[4981]: E0227 19:05:03.357673 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4" Feb 27 19:05:03 crc kubenswrapper[4981]: E0227 19:05:03.358187 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5cflq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-54688575f-6xfvc_openstack-operators(1636f598-89d5-474c-85a9-69ea06f889de): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:05:03 crc kubenswrapper[4981]: E0227 19:05:03.361492 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-54688575f-6xfvc" podUID="1636f598-89d5-474c-85a9-69ea06f889de" Feb 27 19:05:03 crc kubenswrapper[4981]: E0227 19:05:03.416795 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84" Feb 27 19:05:03 crc kubenswrapper[4981]: E0227 19:05:03.417014 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6qpfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-74b6b5dc96-bq9tz_openstack-operators(8120c80b-1df9-4534-b5c6-1ff42e7dd5f9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:05:03 crc kubenswrapper[4981]: E0227 19:05:03.418256 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-bq9tz" podUID="8120c80b-1df9-4534-b5c6-1ff42e7dd5f9" Feb 27 19:05:03 crc kubenswrapper[4981]: E0227 19:05:03.771299 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84\\\"\"" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-bq9tz" podUID="8120c80b-1df9-4534-b5c6-1ff42e7dd5f9" Feb 27 19:05:04 crc kubenswrapper[4981]: E0227 19:05:04.215875 4981 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:12fa31d2a2dfe1a832c6a2c0eb58876a3a62595a1a1f49b13c2a1f9b6d378735" Feb 27 19:05:04 crc kubenswrapper[4981]: E0227 19:05:04.216227 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:12fa31d2a2dfe1a832c6a2c0eb58876a3a62595a1a1f49b13c2a1f9b6d378735,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gsp9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-55ffd4876b-sblkk_openstack-operators(96d76f06-213f-4b51-9dfa-7e77c5b97174): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:05:04 crc kubenswrapper[4981]: E0227 19:05:04.220174 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-sblkk" podUID="96d76f06-213f-4b51-9dfa-7e77c5b97174" Feb 27 19:05:04 crc kubenswrapper[4981]: E0227 19:05:04.782083 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:12fa31d2a2dfe1a832c6a2c0eb58876a3a62595a1a1f49b13c2a1f9b6d378735\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-sblkk" podUID="96d76f06-213f-4b51-9dfa-7e77c5b97174" Feb 27 19:05:04 crc kubenswrapper[4981]: E0227 19:05:04.799252 4981 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97" Feb 27 19:05:04 crc kubenswrapper[4981]: E0227 19:05:04.799643 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h2dmj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-bccc79885-9pvd5_openstack-operators(1ac62c06-bfa2-435e-a497-7d0ce40f0fd4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:05:04 crc kubenswrapper[4981]: E0227 19:05:04.800998 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-9pvd5" podUID="1ac62c06-bfa2-435e-a497-7d0ce40f0fd4" Feb 27 19:05:05 crc kubenswrapper[4981]: E0227 19:05:05.877449 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:2d59045b8d8e6f9c5483c4fdda7c5057218d553200dc4bcf26789980ac1d9abd" Feb 27 19:05:05 crc kubenswrapper[4981]: E0227 19:05:05.878626 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:2d59045b8d8e6f9c5483c4fdda7c5057218d553200dc4bcf26789980ac1d9abd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fb6rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5d86c7ddb7-jvb62_openstack-operators(70ce2fb0-509d-4f5a-aff5-8b71df9f78c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:05:05 crc kubenswrapper[4981]: E0227 19:05:05.879956 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-jvb62" podUID="70ce2fb0-509d-4f5a-aff5-8b71df9f78c4" Feb 27 19:05:07 crc kubenswrapper[4981]: E0227 19:05:07.271234 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968" Feb 27 19:05:07 crc kubenswrapper[4981]: E0227 19:05:07.271477 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x2z9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-55b5ff4dbb-w46z2_openstack-operators(f28d8002-92dc-43b8-a2d5-858fd350c18c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:05:07 crc kubenswrapper[4981]: E0227 19:05:07.273116 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-w46z2" podUID="f28d8002-92dc-43b8-a2d5-858fd350c18c" Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.655683 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9chxvph"] Feb 27 19:05:09 crc kubenswrapper[4981]: W0227 19:05:09.675142 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87494b7e_5ff9_4bbf_b2b6_848c5d9269dc.slice/crio-7df3f2bafbd9ba9e162e6a9599253a14077e572a652c53d9fb30582476d5a512 WatchSource:0}: Error finding container 
7df3f2bafbd9ba9e162e6a9599253a14077e572a652c53d9fb30582476d5a512: Status 404 returned error can't find the container with id 7df3f2bafbd9ba9e162e6a9599253a14077e572a652c53d9fb30582476d5a512 Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.791441 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-r27km"] Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.802817 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm"] Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.844931 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wlwb7" event={"ID":"e210121e-ac51-4667-9bbc-7080ed583a49","Type":"ContainerStarted","Data":"c0779352959fa19584e62400174a2048e524b3a9509287a502d155388460ff94"} Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.855304 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-xzqdb" event={"ID":"2691a6d3-7eae-4d8c-b2e4-2157b87f0766","Type":"ContainerStarted","Data":"f3ba5c0de0b0cdb539ca3fcae4a6ed1e502ef53fb8227e892ea0bc4b7a93a142"} Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.855909 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-xzqdb" Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.858547 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-59gjc" event={"ID":"2741c246-6bf8-411d-bdd7-29cb20588c0c","Type":"ContainerStarted","Data":"ad14b584b3e0b16cfd912357c6f6187e0fadf66f97e67939d568dc192bb8af8f"} Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.858934 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/heat-operator-controller-manager-cf99c678f-59gjc" Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.860775 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-k2kn8" event={"ID":"e1c487e5-53af-41ef-8713-87d17ab9632d","Type":"ContainerStarted","Data":"eb2a3e03bf3e2ea1d8be92b2974b4f35b810b55f4f15c777c1d2512943f1fd59"} Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.861113 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-k2kn8" Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.863584 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-6kvkg" event={"ID":"5e144a53-3c1b-49db-9f08-d93ebe9fb576","Type":"ContainerStarted","Data":"a3f9a826767153a63fbc5e844b8f988150a6d055d1dfcd66489486da0896a72e"} Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.864206 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-6kvkg" Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.864796 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wlwb7" podStartSLOduration=2.443831757 podStartE2EDuration="51.864776172s" podCreationTimestamp="2026-02-27 19:04:18 +0000 UTC" firstStartedPulling="2026-02-27 19:04:19.865797102 +0000 UTC m=+1159.344578272" lastFinishedPulling="2026-02-27 19:05:09.286741517 +0000 UTC m=+1208.765522687" observedRunningTime="2026-02-27 19:05:09.862294326 +0000 UTC m=+1209.341075486" watchObservedRunningTime="2026-02-27 19:05:09.864776172 +0000 UTC m=+1209.343557332" Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.887258 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9chxvph" event={"ID":"87494b7e-5ff9-4bbf-b2b6-848c5d9269dc","Type":"ContainerStarted","Data":"7df3f2bafbd9ba9e162e6a9599253a14077e572a652c53d9fb30582476d5a512"} Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.900202 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-59gjc" podStartSLOduration=2.910236388 podStartE2EDuration="52.900187903s" podCreationTimestamp="2026-02-27 19:04:17 +0000 UTC" firstStartedPulling="2026-02-27 19:04:19.137424877 +0000 UTC m=+1158.616206037" lastFinishedPulling="2026-02-27 19:05:09.127376392 +0000 UTC m=+1208.606157552" observedRunningTime="2026-02-27 19:05:09.897261434 +0000 UTC m=+1209.376042594" watchObservedRunningTime="2026-02-27 19:05:09.900187903 +0000 UTC m=+1209.378969063" Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.908360 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-bvts5" event={"ID":"7cbe4d2e-bd57-452d-b873-709e1de024e7","Type":"ContainerStarted","Data":"bb142f4dcdf123ec7405a33f0b0ff671f323ef6c826d05803d7b25994c4cc6db"} Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.908963 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-bvts5" Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.917844 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-tjvzl" event={"ID":"2cd1d521-194a-48fa-9412-a95ff0c2c598","Type":"ContainerStarted","Data":"42fd688e1e18c4a12ac99bf91f261ef3caec77625ccce5604193e42346d44174"} Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.918471 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/glance-operator-controller-manager-64db6967f8-tjvzl" Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.923587 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-jtd4l" event={"ID":"fdf56547-b3d3-4481-acea-493c4ea4b2d9","Type":"ContainerStarted","Data":"389087b093f4525892cf573b4dfe4e8e60bbffbda9b57fd73c25137f87b10157"} Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.924176 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-jtd4l" Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.925883 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-xzqdb" podStartSLOduration=10.382248402 podStartE2EDuration="51.925866056s" podCreationTimestamp="2026-02-27 19:04:18 +0000 UTC" firstStartedPulling="2026-02-27 19:04:19.766249202 +0000 UTC m=+1159.245030362" lastFinishedPulling="2026-02-27 19:05:01.309866816 +0000 UTC m=+1200.788648016" observedRunningTime="2026-02-27 19:05:09.922650399 +0000 UTC m=+1209.401431559" watchObservedRunningTime="2026-02-27 19:05:09.925866056 +0000 UTC m=+1209.404647216" Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.935784 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-rb9hs" event={"ID":"93a2ac79-08d9-4559-a954-bdf0b2eb4dab","Type":"ContainerStarted","Data":"0b96d30f2f7380ec819f7ee48c44837725af7020f001a95d5b479993b015aa74"} Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.936393 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-rb9hs" Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.941967 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-dhczx" event={"ID":"a21f22e5-6cc2-43cc-890c-c9e42d8b12c5","Type":"ContainerStarted","Data":"51a2c40327bd209b0bd8c3ec3481d640b0f78b2e9954e44dc6969786b857a9ba"} Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.942797 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-dhczx" Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.943898 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-sbs2q" event={"ID":"12a33549-4f35-4fe1-851c-21e46a44dff6","Type":"ContainerStarted","Data":"edb477bcabe32936fe54d32507bba4e493b6688a526dedab1e4f6ada70a7499b"} Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.944279 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-sbs2q" Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.945577 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-r27km" event={"ID":"2042b3a5-c802-49c2-911b-b28eb19aecf5","Type":"ContainerStarted","Data":"00f66dce069e7d67f5a7e09057394166f3b28546aa7f85dd61885e93c7b9489c"} Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.946354 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" event={"ID":"d5f991dd-8062-43a8-8725-7a60c5a27a14","Type":"ContainerStarted","Data":"3584633b1e2039c003bdaa6d2eabc1232646dc9f5cff124822b0c2bb09d76306"} Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.947268 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-9fqts" 
event={"ID":"7a1c1676-014d-4de0-ab20-a951ad5bb7fe","Type":"ContainerStarted","Data":"59fe8f9ce1db8065dece396c76ee79396aa0a8bc24299a065ca4c234c89d4c72"} Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.947752 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-9fqts" Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.997533 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-k2kn8" podStartSLOduration=3.501040113 podStartE2EDuration="52.997514184s" podCreationTimestamp="2026-02-27 19:04:17 +0000 UTC" firstStartedPulling="2026-02-27 19:04:19.63085755 +0000 UTC m=+1159.109638710" lastFinishedPulling="2026-02-27 19:05:09.127331591 +0000 UTC m=+1208.606112781" observedRunningTime="2026-02-27 19:05:09.968125936 +0000 UTC m=+1209.446907096" watchObservedRunningTime="2026-02-27 19:05:09.997514184 +0000 UTC m=+1209.476295334" Feb 27 19:05:09 crc kubenswrapper[4981]: I0227 19:05:09.998449 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-9fqts" podStartSLOduration=27.071359068 podStartE2EDuration="52.998442263s" podCreationTimestamp="2026-02-27 19:04:17 +0000 UTC" firstStartedPulling="2026-02-27 19:04:18.883550148 +0000 UTC m=+1158.362331308" lastFinishedPulling="2026-02-27 19:04:44.810633313 +0000 UTC m=+1184.289414503" observedRunningTime="2026-02-27 19:05:09.994001236 +0000 UTC m=+1209.472782406" watchObservedRunningTime="2026-02-27 19:05:09.998442263 +0000 UTC m=+1209.477223413" Feb 27 19:05:10 crc kubenswrapper[4981]: I0227 19:05:10.064973 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-sbs2q" podStartSLOduration=3.666934737 podStartE2EDuration="53.064955863s" podCreationTimestamp="2026-02-27 
19:04:17 +0000 UTC" firstStartedPulling="2026-02-27 19:04:19.782473168 +0000 UTC m=+1159.261254328" lastFinishedPulling="2026-02-27 19:05:09.180494294 +0000 UTC m=+1208.659275454" observedRunningTime="2026-02-27 19:05:10.061503987 +0000 UTC m=+1209.540285147" watchObservedRunningTime="2026-02-27 19:05:10.064955863 +0000 UTC m=+1209.543737023" Feb 27 19:05:10 crc kubenswrapper[4981]: I0227 19:05:10.067093 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-bvts5" podStartSLOduration=24.684835952 podStartE2EDuration="52.067087407s" podCreationTimestamp="2026-02-27 19:04:18 +0000 UTC" firstStartedPulling="2026-02-27 19:04:19.638981938 +0000 UTC m=+1159.117763098" lastFinishedPulling="2026-02-27 19:04:47.021233353 +0000 UTC m=+1186.500014553" observedRunningTime="2026-02-27 19:05:10.016874025 +0000 UTC m=+1209.495655185" watchObservedRunningTime="2026-02-27 19:05:10.067087407 +0000 UTC m=+1209.545868557" Feb 27 19:05:10 crc kubenswrapper[4981]: I0227 19:05:10.091279 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-tjvzl" podStartSLOduration=8.797687326 podStartE2EDuration="53.091262925s" podCreationTimestamp="2026-02-27 19:04:17 +0000 UTC" firstStartedPulling="2026-02-27 19:04:19.766128759 +0000 UTC m=+1159.244909919" lastFinishedPulling="2026-02-27 19:05:04.059704318 +0000 UTC m=+1203.538485518" observedRunningTime="2026-02-27 19:05:10.090007977 +0000 UTC m=+1209.568789137" watchObservedRunningTime="2026-02-27 19:05:10.091262925 +0000 UTC m=+1209.570044085" Feb 27 19:05:10 crc kubenswrapper[4981]: I0227 19:05:10.119621 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-6kvkg" podStartSLOduration=25.444737222 podStartE2EDuration="53.11960553s" podCreationTimestamp="2026-02-27 19:04:17 +0000 
UTC" firstStartedPulling="2026-02-27 19:04:19.346754417 +0000 UTC m=+1158.825535577" lastFinishedPulling="2026-02-27 19:04:47.021622695 +0000 UTC m=+1186.500403885" observedRunningTime="2026-02-27 19:05:10.117231238 +0000 UTC m=+1209.596012398" watchObservedRunningTime="2026-02-27 19:05:10.11960553 +0000 UTC m=+1209.598386690" Feb 27 19:05:10 crc kubenswrapper[4981]: I0227 19:05:10.142692 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-dhczx" podStartSLOduration=3.350931671 podStartE2EDuration="53.142673925s" podCreationTimestamp="2026-02-27 19:04:17 +0000 UTC" firstStartedPulling="2026-02-27 19:04:19.336083652 +0000 UTC m=+1158.814864812" lastFinishedPulling="2026-02-27 19:05:09.127825906 +0000 UTC m=+1208.606607066" observedRunningTime="2026-02-27 19:05:10.141581601 +0000 UTC m=+1209.620362781" watchObservedRunningTime="2026-02-27 19:05:10.142673925 +0000 UTC m=+1209.621455085" Feb 27 19:05:10 crc kubenswrapper[4981]: I0227 19:05:10.167206 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-rb9hs" podStartSLOduration=7.758946414 podStartE2EDuration="52.167186693s" podCreationTimestamp="2026-02-27 19:04:18 +0000 UTC" firstStartedPulling="2026-02-27 19:04:19.651457428 +0000 UTC m=+1159.130238598" lastFinishedPulling="2026-02-27 19:05:04.059697687 +0000 UTC m=+1203.538478877" observedRunningTime="2026-02-27 19:05:10.163243263 +0000 UTC m=+1209.642024423" watchObservedRunningTime="2026-02-27 19:05:10.167186693 +0000 UTC m=+1209.645967863" Feb 27 19:05:10 crc kubenswrapper[4981]: I0227 19:05:10.954413 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-6fkss" 
event={"ID":"67789b9f-79ac-4901-8acc-22a86fb876c4","Type":"ContainerStarted","Data":"5c3a85186c07e8606045bec2dbc8374e08c2f1b5cb21d0822f0d2ad6e99516f9"} Feb 27 19:05:10 crc kubenswrapper[4981]: I0227 19:05:10.955238 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-6fkss" Feb 27 19:05:10 crc kubenswrapper[4981]: I0227 19:05:10.961781 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" event={"ID":"d5f991dd-8062-43a8-8725-7a60c5a27a14","Type":"ContainerStarted","Data":"0b0c202b0199fdc070e354cb4f1ec78c8459a965c9ba8c48e8a967e6ae588ebe"} Feb 27 19:05:10 crc kubenswrapper[4981]: I0227 19:05:10.962324 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" Feb 27 19:05:10 crc kubenswrapper[4981]: I0227 19:05:10.963848 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-h9cbz" event={"ID":"e9987372-8f11-4038-939a-75d1152e5667","Type":"ContainerStarted","Data":"292cc498f873569a6d9dca46a928541031a1628b5cee5f151df08bff68bdd8df"} Feb 27 19:05:10 crc kubenswrapper[4981]: I0227 19:05:10.964174 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-h9cbz" Feb 27 19:05:10 crc kubenswrapper[4981]: I0227 19:05:10.972065 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-67d996989d-6fkss" podStartSLOduration=4.183678021 podStartE2EDuration="53.972042212s" podCreationTimestamp="2026-02-27 19:04:17 +0000 UTC" firstStartedPulling="2026-02-27 19:04:19.339099614 +0000 UTC m=+1158.817880774" lastFinishedPulling="2026-02-27 19:05:09.127463805 +0000 UTC m=+1208.606244965" 
observedRunningTime="2026-02-27 19:05:10.970284368 +0000 UTC m=+1210.449065528" watchObservedRunningTime="2026-02-27 19:05:10.972042212 +0000 UTC m=+1210.450823372" Feb 27 19:05:10 crc kubenswrapper[4981]: I0227 19:05:10.974623 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-jtd4l" podStartSLOduration=11.3752175 podStartE2EDuration="53.974614581s" podCreationTimestamp="2026-02-27 19:04:17 +0000 UTC" firstStartedPulling="2026-02-27 19:04:18.709623609 +0000 UTC m=+1158.188404779" lastFinishedPulling="2026-02-27 19:05:01.30902066 +0000 UTC m=+1200.787801860" observedRunningTime="2026-02-27 19:05:10.19820415 +0000 UTC m=+1209.676985310" watchObservedRunningTime="2026-02-27 19:05:10.974614581 +0000 UTC m=+1210.453395741" Feb 27 19:05:10 crc kubenswrapper[4981]: I0227 19:05:10.987131 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-h9cbz" podStartSLOduration=3.633967965 podStartE2EDuration="52.987114382s" podCreationTimestamp="2026-02-27 19:04:18 +0000 UTC" firstStartedPulling="2026-02-27 19:04:19.774312898 +0000 UTC m=+1159.253094058" lastFinishedPulling="2026-02-27 19:05:09.127459265 +0000 UTC m=+1208.606240475" observedRunningTime="2026-02-27 19:05:10.983399159 +0000 UTC m=+1210.462180319" watchObservedRunningTime="2026-02-27 19:05:10.987114382 +0000 UTC m=+1210.465895542" Feb 27 19:05:11 crc kubenswrapper[4981]: I0227 19:05:11.009630 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" podStartSLOduration=53.009610809 podStartE2EDuration="53.009610809s" podCreationTimestamp="2026-02-27 19:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:05:11.004283276 +0000 UTC 
m=+1210.483064436" watchObservedRunningTime="2026-02-27 19:05:11.009610809 +0000 UTC m=+1210.488391969" Feb 27 19:05:13 crc kubenswrapper[4981]: I0227 19:05:12.934350 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-jfmbn" podUID="24537f79-2aa5-4ba1-afc0-e91183569040" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 19:05:15 crc kubenswrapper[4981]: I0227 19:05:15.048627 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9chxvph" event={"ID":"87494b7e-5ff9-4bbf-b2b6-848c5d9269dc","Type":"ContainerStarted","Data":"211b6a75574a614de811508eb99bf3f4c651884d6286e86f5816f2869fc8ca51"} Feb 27 19:05:15 crc kubenswrapper[4981]: I0227 19:05:15.048704 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9chxvph" Feb 27 19:05:15 crc kubenswrapper[4981]: I0227 19:05:15.049847 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-r27km" event={"ID":"2042b3a5-c802-49c2-911b-b28eb19aecf5","Type":"ContainerStarted","Data":"f334c685665d7c30dfb83ce81a94ed0d52f30d7a2eb7bcafd19c88eae59c4434"} Feb 27 19:05:15 crc kubenswrapper[4981]: I0227 19:05:15.050032 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-r27km" Feb 27 19:05:15 crc kubenswrapper[4981]: I0227 19:05:15.097829 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-r27km" podStartSLOduration=53.600892492 podStartE2EDuration="58.097815775s" podCreationTimestamp="2026-02-27 19:04:17 +0000 UTC" firstStartedPulling="2026-02-27 19:05:09.791875416 +0000 UTC 
m=+1209.270656576" lastFinishedPulling="2026-02-27 19:05:14.288798699 +0000 UTC m=+1213.767579859" observedRunningTime="2026-02-27 19:05:15.093760731 +0000 UTC m=+1214.572541891" watchObservedRunningTime="2026-02-27 19:05:15.097815775 +0000 UTC m=+1214.576596935" Feb 27 19:05:15 crc kubenswrapper[4981]: I0227 19:05:15.099914 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9chxvph" podStartSLOduration=52.486939382 podStartE2EDuration="57.099906118s" podCreationTimestamp="2026-02-27 19:04:18 +0000 UTC" firstStartedPulling="2026-02-27 19:05:09.679613469 +0000 UTC m=+1209.158394619" lastFinishedPulling="2026-02-27 19:05:14.292580155 +0000 UTC m=+1213.771361355" observedRunningTime="2026-02-27 19:05:15.079807224 +0000 UTC m=+1214.558588374" watchObservedRunningTime="2026-02-27 19:05:15.099906118 +0000 UTC m=+1214.578687278" Feb 27 19:05:16 crc kubenswrapper[4981]: E0227 19:05:16.631933 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-54688575f-6xfvc" podUID="1636f598-89d5-474c-85a9-69ea06f889de" Feb 27 19:05:18 crc kubenswrapper[4981]: I0227 19:05:18.070203 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-9fqts" Feb 27 19:05:18 crc kubenswrapper[4981]: I0227 19:05:18.094920 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-jtd4l" Feb 27 19:05:18 crc kubenswrapper[4981]: I0227 19:05:18.110664 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-sbs2q" Feb 27 19:05:18 crc kubenswrapper[4981]: I0227 19:05:18.171888 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-59gjc" Feb 27 19:05:18 crc kubenswrapper[4981]: I0227 19:05:18.208744 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-dhczx" Feb 27 19:05:18 crc kubenswrapper[4981]: I0227 19:05:18.286492 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-6kvkg" Feb 27 19:05:18 crc kubenswrapper[4981]: I0227 19:05:18.295549 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-6fkss" Feb 27 19:05:18 crc kubenswrapper[4981]: I0227 19:05:18.392720 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-bvts5" Feb 27 19:05:18 crc kubenswrapper[4981]: I0227 19:05:18.415745 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-h9cbz" Feb 27 19:05:18 crc kubenswrapper[4981]: I0227 19:05:18.444709 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-tjvzl" Feb 27 19:05:18 crc kubenswrapper[4981]: I0227 19:05:18.600078 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-k2kn8" Feb 27 19:05:18 crc kubenswrapper[4981]: E0227 19:05:18.632911 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-9pvd5" podUID="1ac62c06-bfa2-435e-a497-7d0ce40f0fd4" Feb 27 19:05:18 crc kubenswrapper[4981]: I0227 19:05:18.744925 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-xzqdb" Feb 27 19:05:18 crc kubenswrapper[4981]: I0227 19:05:18.772809 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-rb9hs" Feb 27 19:05:20 crc kubenswrapper[4981]: I0227 19:05:20.029801 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-r27km" Feb 27 19:05:20 crc kubenswrapper[4981]: I0227 19:05:20.032674 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9chxvph" Feb 27 19:05:20 crc kubenswrapper[4981]: I0227 19:05:20.248743 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:05:20 crc kubenswrapper[4981]: I0227 19:05:20.248793 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:05:20 crc kubenswrapper[4981]: I0227 19:05:20.248832 4981 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 19:05:20 crc kubenswrapper[4981]: I0227 19:05:20.249267 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"295fa1abf26d7f71e7264b907ce20f7606d63942d5385b64cf4bd1f2c3c45c16"} pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 19:05:20 crc kubenswrapper[4981]: I0227 19:05:20.249312 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" containerID="cri-o://295fa1abf26d7f71e7264b907ce20f7606d63942d5385b64cf4bd1f2c3c45c16" gracePeriod=600 Feb 27 19:05:20 crc kubenswrapper[4981]: I0227 19:05:20.384309 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-bq9tz" event={"ID":"8120c80b-1df9-4534-b5c6-1ff42e7dd5f9","Type":"ContainerStarted","Data":"8bad8b0fe996646b23e79f0650d33227ab38baf14455d873280b8a5821253b37"} Feb 27 19:05:20 crc kubenswrapper[4981]: I0227 19:05:20.384620 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-bq9tz" Feb 27 19:05:20 crc kubenswrapper[4981]: I0227 19:05:20.396818 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-67c78cbb8b-dmjqm" Feb 27 19:05:20 crc kubenswrapper[4981]: I0227 19:05:20.404829 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-bq9tz" podStartSLOduration=4.497005324 podStartE2EDuration="1m3.404809255s" 
podCreationTimestamp="2026-02-27 19:04:17 +0000 UTC" firstStartedPulling="2026-02-27 19:04:19.620664238 +0000 UTC m=+1159.099445418" lastFinishedPulling="2026-02-27 19:05:18.528468189 +0000 UTC m=+1218.007249349" observedRunningTime="2026-02-27 19:05:20.399674588 +0000 UTC m=+1219.878455758" watchObservedRunningTime="2026-02-27 19:05:20.404809255 +0000 UTC m=+1219.883590415" Feb 27 19:05:20 crc kubenswrapper[4981]: E0227 19:05:20.629989 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:2d59045b8d8e6f9c5483c4fdda7c5057218d553200dc4bcf26789980ac1d9abd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-jvb62" podUID="70ce2fb0-509d-4f5a-aff5-8b71df9f78c4" Feb 27 19:05:21 crc kubenswrapper[4981]: I0227 19:05:21.396708 4981 generic.go:334] "Generic (PLEG): container finished" podID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerID="295fa1abf26d7f71e7264b907ce20f7606d63942d5385b64cf4bd1f2c3c45c16" exitCode=0 Feb 27 19:05:21 crc kubenswrapper[4981]: I0227 19:05:21.396786 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerDied","Data":"295fa1abf26d7f71e7264b907ce20f7606d63942d5385b64cf4bd1f2c3c45c16"} Feb 27 19:05:21 crc kubenswrapper[4981]: I0227 19:05:21.397385 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerStarted","Data":"03bdafd14e1d7a2332dfab716d224757c23e9832e5c4bc0ebaf94e7e0e277e07"} Feb 27 19:05:21 crc kubenswrapper[4981]: I0227 19:05:21.397422 4981 scope.go:117] "RemoveContainer" containerID="219fa48bb79b5cd44ef23b0ba5b266e3305b85445a083e120d72a5d185159bb6" Feb 27 19:05:21 crc 
kubenswrapper[4981]: I0227 19:05:21.399396 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-sblkk" event={"ID":"96d76f06-213f-4b51-9dfa-7e77c5b97174","Type":"ContainerStarted","Data":"9517c9ed6c29b72abf0b309ed51225f8996c5e95cd4ec6fa55874c998d56661f"} Feb 27 19:05:22 crc kubenswrapper[4981]: E0227 19:05:22.088622 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968\\\"\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-w46z2" podUID="f28d8002-92dc-43b8-a2d5-858fd350c18c" Feb 27 19:05:22 crc kubenswrapper[4981]: I0227 19:05:22.409889 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-sblkk" Feb 27 19:05:22 crc kubenswrapper[4981]: I0227 19:05:22.431642 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-sblkk" podStartSLOduration=4.068237846 podStartE2EDuration="1m5.431624295s" podCreationTimestamp="2026-02-27 19:04:17 +0000 UTC" firstStartedPulling="2026-02-27 19:04:19.63546589 +0000 UTC m=+1159.114247050" lastFinishedPulling="2026-02-27 19:05:20.998852309 +0000 UTC m=+1220.477633499" observedRunningTime="2026-02-27 19:05:22.43046069 +0000 UTC m=+1221.909241850" watchObservedRunningTime="2026-02-27 19:05:22.431624295 +0000 UTC m=+1221.910405455" Feb 27 19:05:28 crc kubenswrapper[4981]: I0227 19:05:28.585960 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-sblkk" Feb 27 19:05:28 crc kubenswrapper[4981]: I0227 19:05:28.636219 4981 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-bq9tz" Feb 27 19:05:29 crc kubenswrapper[4981]: I0227 19:05:29.485314 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-6xfvc" event={"ID":"1636f598-89d5-474c-85a9-69ea06f889de","Type":"ContainerStarted","Data":"099acf439c5132dae339fbdd6fb09b9abf4d78b0637214310b5da9caeb4097fe"} Feb 27 19:05:29 crc kubenswrapper[4981]: I0227 19:05:29.486097 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-54688575f-6xfvc" Feb 27 19:05:29 crc kubenswrapper[4981]: I0227 19:05:29.521490 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-54688575f-6xfvc" podStartSLOduration=3.418304048 podStartE2EDuration="1m12.521465209s" podCreationTimestamp="2026-02-27 19:04:17 +0000 UTC" firstStartedPulling="2026-02-27 19:04:19.790865545 +0000 UTC m=+1159.269646705" lastFinishedPulling="2026-02-27 19:05:28.894026696 +0000 UTC m=+1228.372807866" observedRunningTime="2026-02-27 19:05:29.511126564 +0000 UTC m=+1228.989907764" watchObservedRunningTime="2026-02-27 19:05:29.521465209 +0000 UTC m=+1229.000246409" Feb 27 19:05:32 crc kubenswrapper[4981]: I0227 19:05:32.522104 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-9pvd5" event={"ID":"1ac62c06-bfa2-435e-a497-7d0ce40f0fd4","Type":"ContainerStarted","Data":"8af83b81404558eebc0362e711d71fdd4badb1bf0b0b95ac15d0388d3ff72553"} Feb 27 19:05:32 crc kubenswrapper[4981]: I0227 19:05:32.523285 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-9pvd5" Feb 27 19:05:32 crc kubenswrapper[4981]: I0227 19:05:32.525143 4981 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-jvb62" event={"ID":"70ce2fb0-509d-4f5a-aff5-8b71df9f78c4","Type":"ContainerStarted","Data":"7cc5c71b4f96ec754f09d455ba15257cbbda7d47e3e5f29861683f94e48fa86e"} Feb 27 19:05:32 crc kubenswrapper[4981]: I0227 19:05:32.525421 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-jvb62" Feb 27 19:05:32 crc kubenswrapper[4981]: I0227 19:05:32.539465 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-9pvd5" podStartSLOduration=2.211068021 podStartE2EDuration="1m14.539437445s" podCreationTimestamp="2026-02-27 19:04:18 +0000 UTC" firstStartedPulling="2026-02-27 19:04:19.85755585 +0000 UTC m=+1159.336337010" lastFinishedPulling="2026-02-27 19:05:32.185925244 +0000 UTC m=+1231.664706434" observedRunningTime="2026-02-27 19:05:32.537409894 +0000 UTC m=+1232.016191064" watchObservedRunningTime="2026-02-27 19:05:32.539437445 +0000 UTC m=+1232.018218645" Feb 27 19:05:32 crc kubenswrapper[4981]: I0227 19:05:32.574493 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-jvb62" podStartSLOduration=3.047336542 podStartE2EDuration="1m15.574468394s" podCreationTimestamp="2026-02-27 19:04:17 +0000 UTC" firstStartedPulling="2026-02-27 19:04:19.802555751 +0000 UTC m=+1159.281336911" lastFinishedPulling="2026-02-27 19:05:32.329687603 +0000 UTC m=+1231.808468763" observedRunningTime="2026-02-27 19:05:32.559184088 +0000 UTC m=+1232.037965258" watchObservedRunningTime="2026-02-27 19:05:32.574468394 +0000 UTC m=+1232.053249584" Feb 27 19:05:35 crc kubenswrapper[4981]: I0227 19:05:35.556633 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-w46z2" 
event={"ID":"f28d8002-92dc-43b8-a2d5-858fd350c18c","Type":"ContainerStarted","Data":"7bc82fdd33eeff2ab4b99de6d53072834e4e49e7f895d32086dde8e775b51449"} Feb 27 19:05:35 crc kubenswrapper[4981]: I0227 19:05:35.557497 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-w46z2" Feb 27 19:05:38 crc kubenswrapper[4981]: I0227 19:05:38.313864 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-54688575f-6xfvc" Feb 27 19:05:38 crc kubenswrapper[4981]: I0227 19:05:38.339101 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-w46z2" podStartSLOduration=5.032593779 podStartE2EDuration="1m20.339050188s" podCreationTimestamp="2026-02-27 19:04:18 +0000 UTC" firstStartedPulling="2026-02-27 19:04:19.816858747 +0000 UTC m=+1159.295639907" lastFinishedPulling="2026-02-27 19:05:35.123315116 +0000 UTC m=+1234.602096316" observedRunningTime="2026-02-27 19:05:35.57287279 +0000 UTC m=+1235.051653990" watchObservedRunningTime="2026-02-27 19:05:38.339050188 +0000 UTC m=+1237.817831358" Feb 27 19:05:38 crc kubenswrapper[4981]: I0227 19:05:38.367885 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-jvb62" Feb 27 19:05:38 crc kubenswrapper[4981]: I0227 19:05:38.849957 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-9pvd5" Feb 27 19:05:48 crc kubenswrapper[4981]: I0227 19:05:48.802982 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-w46z2" Feb 27 19:06:00 crc kubenswrapper[4981]: I0227 19:06:00.404606 4981 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29536986-48nk8"] Feb 27 19:06:00 crc kubenswrapper[4981]: E0227 19:06:00.405582 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cce3f7fc-3761-458a-91ed-53ff41805400" containerName="oc" Feb 27 19:06:00 crc kubenswrapper[4981]: I0227 19:06:00.405599 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cce3f7fc-3761-458a-91ed-53ff41805400" containerName="oc" Feb 27 19:06:00 crc kubenswrapper[4981]: I0227 19:06:00.405812 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="cce3f7fc-3761-458a-91ed-53ff41805400" containerName="oc" Feb 27 19:06:00 crc kubenswrapper[4981]: I0227 19:06:00.406393 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536986-48nk8" Feb 27 19:06:00 crc kubenswrapper[4981]: I0227 19:06:00.409758 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:06:00 crc kubenswrapper[4981]: I0227 19:06:00.409980 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf" Feb 27 19:06:00 crc kubenswrapper[4981]: I0227 19:06:00.411014 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:06:00 crc kubenswrapper[4981]: I0227 19:06:00.421371 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536986-48nk8"] Feb 27 19:06:00 crc kubenswrapper[4981]: I0227 19:06:00.542270 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd5wr\" (UniqueName: \"kubernetes.io/projected/fdb88422-3167-4822-80a5-2e9abcb29904-kube-api-access-qd5wr\") pod \"auto-csr-approver-29536986-48nk8\" (UID: \"fdb88422-3167-4822-80a5-2e9abcb29904\") " pod="openshift-infra/auto-csr-approver-29536986-48nk8" Feb 27 19:06:00 crc kubenswrapper[4981]: I0227 
19:06:00.644089 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd5wr\" (UniqueName: \"kubernetes.io/projected/fdb88422-3167-4822-80a5-2e9abcb29904-kube-api-access-qd5wr\") pod \"auto-csr-approver-29536986-48nk8\" (UID: \"fdb88422-3167-4822-80a5-2e9abcb29904\") " pod="openshift-infra/auto-csr-approver-29536986-48nk8" Feb 27 19:06:00 crc kubenswrapper[4981]: I0227 19:06:00.671758 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd5wr\" (UniqueName: \"kubernetes.io/projected/fdb88422-3167-4822-80a5-2e9abcb29904-kube-api-access-qd5wr\") pod \"auto-csr-approver-29536986-48nk8\" (UID: \"fdb88422-3167-4822-80a5-2e9abcb29904\") " pod="openshift-infra/auto-csr-approver-29536986-48nk8" Feb 27 19:06:00 crc kubenswrapper[4981]: I0227 19:06:00.727382 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536986-48nk8" Feb 27 19:06:01 crc kubenswrapper[4981]: I0227 19:06:01.406647 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536986-48nk8"] Feb 27 19:06:01 crc kubenswrapper[4981]: I0227 19:06:01.775340 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536986-48nk8" event={"ID":"fdb88422-3167-4822-80a5-2e9abcb29904","Type":"ContainerStarted","Data":"3a519eabd30149b9655a32e197aec3f008f4588b638017b5df5e7ab96faea493"} Feb 27 19:06:04 crc kubenswrapper[4981]: I0227 19:06:04.535310 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tm5gc"] Feb 27 19:06:04 crc kubenswrapper[4981]: I0227 19:06:04.536568 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tm5gc" Feb 27 19:06:04 crc kubenswrapper[4981]: I0227 19:06:04.547852 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 27 19:06:04 crc kubenswrapper[4981]: I0227 19:06:04.548124 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 27 19:06:04 crc kubenswrapper[4981]: I0227 19:06:04.548272 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 27 19:06:04 crc kubenswrapper[4981]: I0227 19:06:04.548952 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-k854p" Feb 27 19:06:04 crc kubenswrapper[4981]: I0227 19:06:04.557011 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tm5gc"] Feb 27 19:06:04 crc kubenswrapper[4981]: I0227 19:06:04.594518 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4r6lz"] Feb 27 19:06:04 crc kubenswrapper[4981]: I0227 19:06:04.595555 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-4r6lz" Feb 27 19:06:04 crc kubenswrapper[4981]: I0227 19:06:04.597752 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 27 19:06:04 crc kubenswrapper[4981]: I0227 19:06:04.610425 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4r6lz"] Feb 27 19:06:04 crc kubenswrapper[4981]: I0227 19:06:04.676497 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/901a509c-93f9-4a67-9321-88794b748b17-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-4r6lz\" (UID: \"901a509c-93f9-4a67-9321-88794b748b17\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4r6lz" Feb 27 19:06:04 crc kubenswrapper[4981]: I0227 19:06:04.676547 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/901a509c-93f9-4a67-9321-88794b748b17-config\") pod \"dnsmasq-dns-78dd6ddcc-4r6lz\" (UID: \"901a509c-93f9-4a67-9321-88794b748b17\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4r6lz" Feb 27 19:06:04 crc kubenswrapper[4981]: I0227 19:06:04.676581 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhxb8\" (UniqueName: \"kubernetes.io/projected/d3d98bc1-ffa8-4522-9777-4aded2d46699-kube-api-access-dhxb8\") pod \"dnsmasq-dns-675f4bcbfc-tm5gc\" (UID: \"d3d98bc1-ffa8-4522-9777-4aded2d46699\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tm5gc" Feb 27 19:06:04 crc kubenswrapper[4981]: I0227 19:06:04.676621 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bhk8\" (UniqueName: \"kubernetes.io/projected/901a509c-93f9-4a67-9321-88794b748b17-kube-api-access-8bhk8\") pod \"dnsmasq-dns-78dd6ddcc-4r6lz\" (UID: \"901a509c-93f9-4a67-9321-88794b748b17\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-4r6lz" Feb 27 19:06:04 crc kubenswrapper[4981]: I0227 19:06:04.676769 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d98bc1-ffa8-4522-9777-4aded2d46699-config\") pod \"dnsmasq-dns-675f4bcbfc-tm5gc\" (UID: \"d3d98bc1-ffa8-4522-9777-4aded2d46699\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tm5gc" Feb 27 19:06:04 crc kubenswrapper[4981]: I0227 19:06:04.777641 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bhk8\" (UniqueName: \"kubernetes.io/projected/901a509c-93f9-4a67-9321-88794b748b17-kube-api-access-8bhk8\") pod \"dnsmasq-dns-78dd6ddcc-4r6lz\" (UID: \"901a509c-93f9-4a67-9321-88794b748b17\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4r6lz" Feb 27 19:06:04 crc kubenswrapper[4981]: I0227 19:06:04.778054 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d98bc1-ffa8-4522-9777-4aded2d46699-config\") pod \"dnsmasq-dns-675f4bcbfc-tm5gc\" (UID: \"d3d98bc1-ffa8-4522-9777-4aded2d46699\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tm5gc" Feb 27 19:06:04 crc kubenswrapper[4981]: I0227 19:06:04.778141 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/901a509c-93f9-4a67-9321-88794b748b17-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-4r6lz\" (UID: \"901a509c-93f9-4a67-9321-88794b748b17\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4r6lz" Feb 27 19:06:04 crc kubenswrapper[4981]: I0227 19:06:04.778172 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/901a509c-93f9-4a67-9321-88794b748b17-config\") pod \"dnsmasq-dns-78dd6ddcc-4r6lz\" (UID: \"901a509c-93f9-4a67-9321-88794b748b17\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4r6lz" Feb 27 19:06:04 crc kubenswrapper[4981]: 
I0227 19:06:04.778206 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhxb8\" (UniqueName: \"kubernetes.io/projected/d3d98bc1-ffa8-4522-9777-4aded2d46699-kube-api-access-dhxb8\") pod \"dnsmasq-dns-675f4bcbfc-tm5gc\" (UID: \"d3d98bc1-ffa8-4522-9777-4aded2d46699\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tm5gc" Feb 27 19:06:04 crc kubenswrapper[4981]: I0227 19:06:04.779584 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d98bc1-ffa8-4522-9777-4aded2d46699-config\") pod \"dnsmasq-dns-675f4bcbfc-tm5gc\" (UID: \"d3d98bc1-ffa8-4522-9777-4aded2d46699\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tm5gc" Feb 27 19:06:04 crc kubenswrapper[4981]: I0227 19:06:04.780427 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/901a509c-93f9-4a67-9321-88794b748b17-config\") pod \"dnsmasq-dns-78dd6ddcc-4r6lz\" (UID: \"901a509c-93f9-4a67-9321-88794b748b17\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4r6lz" Feb 27 19:06:04 crc kubenswrapper[4981]: I0227 19:06:04.780455 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/901a509c-93f9-4a67-9321-88794b748b17-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-4r6lz\" (UID: \"901a509c-93f9-4a67-9321-88794b748b17\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4r6lz" Feb 27 19:06:04 crc kubenswrapper[4981]: I0227 19:06:04.796444 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bhk8\" (UniqueName: \"kubernetes.io/projected/901a509c-93f9-4a67-9321-88794b748b17-kube-api-access-8bhk8\") pod \"dnsmasq-dns-78dd6ddcc-4r6lz\" (UID: \"901a509c-93f9-4a67-9321-88794b748b17\") " pod="openstack/dnsmasq-dns-78dd6ddcc-4r6lz" Feb 27 19:06:04 crc kubenswrapper[4981]: I0227 19:06:04.798349 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dhxb8\" (UniqueName: \"kubernetes.io/projected/d3d98bc1-ffa8-4522-9777-4aded2d46699-kube-api-access-dhxb8\") pod \"dnsmasq-dns-675f4bcbfc-tm5gc\" (UID: \"d3d98bc1-ffa8-4522-9777-4aded2d46699\") " pod="openstack/dnsmasq-dns-675f4bcbfc-tm5gc" Feb 27 19:06:05 crc kubenswrapper[4981]: I0227 19:06:04.860644 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tm5gc" Feb 27 19:06:05 crc kubenswrapper[4981]: I0227 19:06:04.964466 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-4r6lz" Feb 27 19:06:05 crc kubenswrapper[4981]: I0227 19:06:05.571940 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4r6lz"] Feb 27 19:06:05 crc kubenswrapper[4981]: W0227 19:06:05.576766 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod901a509c_93f9_4a67_9321_88794b748b17.slice/crio-321bc351a8a7bb81f6a4493c5cc431ad08711b2af1c27e3cb1cb594c5d33f37d WatchSource:0}: Error finding container 321bc351a8a7bb81f6a4493c5cc431ad08711b2af1c27e3cb1cb594c5d33f37d: Status 404 returned error can't find the container with id 321bc351a8a7bb81f6a4493c5cc431ad08711b2af1c27e3cb1cb594c5d33f37d Feb 27 19:06:05 crc kubenswrapper[4981]: I0227 19:06:05.663392 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tm5gc"] Feb 27 19:06:05 crc kubenswrapper[4981]: W0227 19:06:05.727085 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3d98bc1_ffa8_4522_9777_4aded2d46699.slice/crio-869f2b63bcf6e45ef30c8775df0ec9b8d223a760d218d0a1c67aacd8a65158b7 WatchSource:0}: Error finding container 869f2b63bcf6e45ef30c8775df0ec9b8d223a760d218d0a1c67aacd8a65158b7: Status 404 returned error can't find the container with id 
869f2b63bcf6e45ef30c8775df0ec9b8d223a760d218d0a1c67aacd8a65158b7 Feb 27 19:06:06 crc kubenswrapper[4981]: I0227 19:06:06.060601 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-4r6lz" event={"ID":"901a509c-93f9-4a67-9321-88794b748b17","Type":"ContainerStarted","Data":"321bc351a8a7bb81f6a4493c5cc431ad08711b2af1c27e3cb1cb594c5d33f37d"} Feb 27 19:06:06 crc kubenswrapper[4981]: I0227 19:06:06.062794 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-tm5gc" event={"ID":"d3d98bc1-ffa8-4522-9777-4aded2d46699","Type":"ContainerStarted","Data":"869f2b63bcf6e45ef30c8775df0ec9b8d223a760d218d0a1c67aacd8a65158b7"} Feb 27 19:06:06 crc kubenswrapper[4981]: I0227 19:06:06.065744 4981 generic.go:334] "Generic (PLEG): container finished" podID="fdb88422-3167-4822-80a5-2e9abcb29904" containerID="f1769b116d54285295f3d509699a8536ac9a91b18a4131665c502f03f5b4e4fe" exitCode=0 Feb 27 19:06:06 crc kubenswrapper[4981]: I0227 19:06:06.065796 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536986-48nk8" event={"ID":"fdb88422-3167-4822-80a5-2e9abcb29904","Type":"ContainerDied","Data":"f1769b116d54285295f3d509699a8536ac9a91b18a4131665c502f03f5b4e4fe"} Feb 27 19:06:06 crc kubenswrapper[4981]: I0227 19:06:06.757478 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tm5gc"] Feb 27 19:06:06 crc kubenswrapper[4981]: I0227 19:06:06.790443 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-4s44w"] Feb 27 19:06:06 crc kubenswrapper[4981]: I0227 19:06:06.791860 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-4s44w" Feb 27 19:06:06 crc kubenswrapper[4981]: I0227 19:06:06.798723 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-4s44w"] Feb 27 19:06:06 crc kubenswrapper[4981]: I0227 19:06:06.911609 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44f73603-dc6b-4e0b-bbef-ab35a4783e0e-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-4s44w\" (UID: \"44f73603-dc6b-4e0b-bbef-ab35a4783e0e\") " pod="openstack/dnsmasq-dns-5ccc8479f9-4s44w" Feb 27 19:06:06 crc kubenswrapper[4981]: I0227 19:06:06.911676 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44f73603-dc6b-4e0b-bbef-ab35a4783e0e-config\") pod \"dnsmasq-dns-5ccc8479f9-4s44w\" (UID: \"44f73603-dc6b-4e0b-bbef-ab35a4783e0e\") " pod="openstack/dnsmasq-dns-5ccc8479f9-4s44w" Feb 27 19:06:06 crc kubenswrapper[4981]: I0227 19:06:06.911724 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-585jt\" (UniqueName: \"kubernetes.io/projected/44f73603-dc6b-4e0b-bbef-ab35a4783e0e-kube-api-access-585jt\") pod \"dnsmasq-dns-5ccc8479f9-4s44w\" (UID: \"44f73603-dc6b-4e0b-bbef-ab35a4783e0e\") " pod="openstack/dnsmasq-dns-5ccc8479f9-4s44w" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.013629 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44f73603-dc6b-4e0b-bbef-ab35a4783e0e-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-4s44w\" (UID: \"44f73603-dc6b-4e0b-bbef-ab35a4783e0e\") " pod="openstack/dnsmasq-dns-5ccc8479f9-4s44w" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.013700 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/44f73603-dc6b-4e0b-bbef-ab35a4783e0e-config\") pod \"dnsmasq-dns-5ccc8479f9-4s44w\" (UID: \"44f73603-dc6b-4e0b-bbef-ab35a4783e0e\") " pod="openstack/dnsmasq-dns-5ccc8479f9-4s44w" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.013746 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-585jt\" (UniqueName: \"kubernetes.io/projected/44f73603-dc6b-4e0b-bbef-ab35a4783e0e-kube-api-access-585jt\") pod \"dnsmasq-dns-5ccc8479f9-4s44w\" (UID: \"44f73603-dc6b-4e0b-bbef-ab35a4783e0e\") " pod="openstack/dnsmasq-dns-5ccc8479f9-4s44w" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.014599 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44f73603-dc6b-4e0b-bbef-ab35a4783e0e-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-4s44w\" (UID: \"44f73603-dc6b-4e0b-bbef-ab35a4783e0e\") " pod="openstack/dnsmasq-dns-5ccc8479f9-4s44w" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.014873 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44f73603-dc6b-4e0b-bbef-ab35a4783e0e-config\") pod \"dnsmasq-dns-5ccc8479f9-4s44w\" (UID: \"44f73603-dc6b-4e0b-bbef-ab35a4783e0e\") " pod="openstack/dnsmasq-dns-5ccc8479f9-4s44w" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.033647 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-585jt\" (UniqueName: \"kubernetes.io/projected/44f73603-dc6b-4e0b-bbef-ab35a4783e0e-kube-api-access-585jt\") pod \"dnsmasq-dns-5ccc8479f9-4s44w\" (UID: \"44f73603-dc6b-4e0b-bbef-ab35a4783e0e\") " pod="openstack/dnsmasq-dns-5ccc8479f9-4s44w" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.108283 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-4s44w" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.402593 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536986-48nk8" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.437780 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4r6lz"] Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.464923 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dnkp5"] Feb 27 19:06:07 crc kubenswrapper[4981]: E0227 19:06:07.465230 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdb88422-3167-4822-80a5-2e9abcb29904" containerName="oc" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.465246 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdb88422-3167-4822-80a5-2e9abcb29904" containerName="oc" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.465391 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdb88422-3167-4822-80a5-2e9abcb29904" containerName="oc" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.466084 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-dnkp5" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.476501 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dnkp5"] Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.520337 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd5wr\" (UniqueName: \"kubernetes.io/projected/fdb88422-3167-4822-80a5-2e9abcb29904-kube-api-access-qd5wr\") pod \"fdb88422-3167-4822-80a5-2e9abcb29904\" (UID: \"fdb88422-3167-4822-80a5-2e9abcb29904\") " Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.622293 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabb19c5-377f-433c-b523-68d8e1296f9b-config\") pod \"dnsmasq-dns-57d769cc4f-dnkp5\" (UID: \"dabb19c5-377f-433c-b523-68d8e1296f9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-dnkp5" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.622388 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nssks\" (UniqueName: \"kubernetes.io/projected/dabb19c5-377f-433c-b523-68d8e1296f9b-kube-api-access-nssks\") pod \"dnsmasq-dns-57d769cc4f-dnkp5\" (UID: \"dabb19c5-377f-433c-b523-68d8e1296f9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-dnkp5" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.622521 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dabb19c5-377f-433c-b523-68d8e1296f9b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-dnkp5\" (UID: \"dabb19c5-377f-433c-b523-68d8e1296f9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-dnkp5" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.888293 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dabb19c5-377f-433c-b523-68d8e1296f9b-config\") pod \"dnsmasq-dns-57d769cc4f-dnkp5\" (UID: \"dabb19c5-377f-433c-b523-68d8e1296f9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-dnkp5" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.888364 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nssks\" (UniqueName: \"kubernetes.io/projected/dabb19c5-377f-433c-b523-68d8e1296f9b-kube-api-access-nssks\") pod \"dnsmasq-dns-57d769cc4f-dnkp5\" (UID: \"dabb19c5-377f-433c-b523-68d8e1296f9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-dnkp5" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.888391 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dabb19c5-377f-433c-b523-68d8e1296f9b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-dnkp5\" (UID: \"dabb19c5-377f-433c-b523-68d8e1296f9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-dnkp5" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.889266 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dabb19c5-377f-433c-b523-68d8e1296f9b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-dnkp5\" (UID: \"dabb19c5-377f-433c-b523-68d8e1296f9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-dnkp5" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.889947 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabb19c5-377f-433c-b523-68d8e1296f9b-config\") pod \"dnsmasq-dns-57d769cc4f-dnkp5\" (UID: \"dabb19c5-377f-433c-b523-68d8e1296f9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-dnkp5" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.895600 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdb88422-3167-4822-80a5-2e9abcb29904-kube-api-access-qd5wr" (OuterVolumeSpecName: "kube-api-access-qd5wr") pod 
"fdb88422-3167-4822-80a5-2e9abcb29904" (UID: "fdb88422-3167-4822-80a5-2e9abcb29904"). InnerVolumeSpecName "kube-api-access-qd5wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.925892 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nssks\" (UniqueName: \"kubernetes.io/projected/dabb19c5-377f-433c-b523-68d8e1296f9b-kube-api-access-nssks\") pod \"dnsmasq-dns-57d769cc4f-dnkp5\" (UID: \"dabb19c5-377f-433c-b523-68d8e1296f9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-dnkp5" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.943776 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-4s44w"] Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.954926 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.956061 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:07 crc kubenswrapper[4981]: W0227 19:06:07.957341 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44f73603_dc6b_4e0b_bbef_ab35a4783e0e.slice/crio-cd2056e9f044acb57439bae1a206642716fe564ea703d6d4c4b8a29e44a6cf4d WatchSource:0}: Error finding container cd2056e9f044acb57439bae1a206642716fe564ea703d6d4c4b8a29e44a6cf4d: Status 404 returned error can't find the container with id cd2056e9f044acb57439bae1a206642716fe564ea703d6d4c4b8a29e44a6cf4d Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.982343 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.984629 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-q5g4z" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.984802 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.984931 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.985038 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.985218 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.985379 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.989688 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd5wr\" (UniqueName: 
\"kubernetes.io/projected/fdb88422-3167-4822-80a5-2e9abcb29904-kube-api-access-qd5wr\") on node \"crc\" DevicePath \"\"" Feb 27 19:06:07 crc kubenswrapper[4981]: I0227 19:06:07.992955 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.100977 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f928877c-eaff-4ab4-ae3b-ba6ed721642c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.101788 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f928877c-eaff-4ab4-ae3b-ba6ed721642c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.101828 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.101848 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f928877c-eaff-4ab4-ae3b-ba6ed721642c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.101891 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.101953 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.102010 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkpxm\" (UniqueName: \"kubernetes.io/projected/f928877c-eaff-4ab4-ae3b-ba6ed721642c-kube-api-access-hkpxm\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.102028 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f928877c-eaff-4ab4-ae3b-ba6ed721642c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.102044 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f928877c-eaff-4ab4-ae3b-ba6ed721642c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.102080 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f928877c-eaff-4ab4-ae3b-ba6ed721642c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.102111 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.122291 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-4s44w" event={"ID":"44f73603-dc6b-4e0b-bbef-ab35a4783e0e","Type":"ContainerStarted","Data":"cd2056e9f044acb57439bae1a206642716fe564ea703d6d4c4b8a29e44a6cf4d"} Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.140369 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536986-48nk8" event={"ID":"fdb88422-3167-4822-80a5-2e9abcb29904","Type":"ContainerDied","Data":"3a519eabd30149b9655a32e197aec3f008f4588b638017b5df5e7ab96faea493"} Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.140422 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a519eabd30149b9655a32e197aec3f008f4588b638017b5df5e7ab96faea493" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.140493 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536986-48nk8" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.203354 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f928877c-eaff-4ab4-ae3b-ba6ed721642c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.203452 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.203481 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.203526 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkpxm\" (UniqueName: \"kubernetes.io/projected/f928877c-eaff-4ab4-ae3b-ba6ed721642c-kube-api-access-hkpxm\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.203551 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f928877c-eaff-4ab4-ae3b-ba6ed721642c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: 
I0227 19:06:08.203574 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f928877c-eaff-4ab4-ae3b-ba6ed721642c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.203601 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f928877c-eaff-4ab4-ae3b-ba6ed721642c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.203625 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.203690 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f928877c-eaff-4ab4-ae3b-ba6ed721642c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.203722 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f928877c-eaff-4ab4-ae3b-ba6ed721642c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.203751 4981 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.203848 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.204626 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.204990 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.205042 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.205852 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f928877c-eaff-4ab4-ae3b-ba6ed721642c-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.207299 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-dnkp5" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.208718 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f928877c-eaff-4ab4-ae3b-ba6ed721642c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.212328 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f928877c-eaff-4ab4-ae3b-ba6ed721642c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.213646 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f928877c-eaff-4ab4-ae3b-ba6ed721642c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.217506 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f928877c-eaff-4ab4-ae3b-ba6ed721642c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.220275 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/f928877c-eaff-4ab4-ae3b-ba6ed721642c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.232722 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.237192 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkpxm\" (UniqueName: \"kubernetes.io/projected/f928877c-eaff-4ab4-ae3b-ba6ed721642c-kube-api-access-hkpxm\") pod \"rabbitmq-cell1-server-0\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.338069 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.495029 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536980-lq4dg"] Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.500773 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536980-lq4dg"] Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.609424 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dnkp5"] Feb 27 19:06:08 crc kubenswrapper[4981]: W0227 19:06:08.615222 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddabb19c5_377f_433c_b523_68d8e1296f9b.slice/crio-e9cf6201f7011c59c0727729c169e55386d1191a84161a2ad9d706a1a98024b6 WatchSource:0}: Error finding container e9cf6201f7011c59c0727729c169e55386d1191a84161a2ad9d706a1a98024b6: Status 404 returned error can't find the container with id e9cf6201f7011c59c0727729c169e55386d1191a84161a2ad9d706a1a98024b6 Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.815192 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.915848 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.917093 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.920333 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.920384 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.920562 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.921119 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-6j7v9" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.921123 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.921251 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.921312 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 27 19:06:08 crc kubenswrapper[4981]: I0227 19:06:08.935604 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.018744 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.018796 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.018816 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.018842 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-config-data\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.018868 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwswc\" (UniqueName: \"kubernetes.io/projected/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-kube-api-access-nwswc\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.018886 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.018925 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.018959 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.018990 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.019019 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.019057 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.120226 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-erlang-cookie-secret\") 
pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.120337 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.120415 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.120480 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.120509 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.120549 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-config-data\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 
19:06:09.120595 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.120624 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwswc\" (UniqueName: \"kubernetes.io/projected/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-kube-api-access-nwswc\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.120693 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.120744 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.120796 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.121980 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.122218 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-config-data\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.122386 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.123115 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.123188 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.132145 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " 
pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.134343 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.135276 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.135586 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.136841 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.139172 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwswc\" (UniqueName: \"kubernetes.io/projected/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-kube-api-access-nwswc\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.147590 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"f928877c-eaff-4ab4-ae3b-ba6ed721642c","Type":"ContainerStarted","Data":"6795223a6b8b39373261959a131e16cb35f3b07b0bf0ad21ba4a93e00f66f0ab"} Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.148465 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-dnkp5" event={"ID":"dabb19c5-377f-433c-b523-68d8e1296f9b","Type":"ContainerStarted","Data":"e9cf6201f7011c59c0727729c169e55386d1191a84161a2ad9d706a1a98024b6"} Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.157228 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.242260 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.641566 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46100661-2b00-441b-a3e6-394279e60051" path="/var/lib/kubelet/pods/46100661-2b00-441b-a3e6-394279e60051/volumes" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.655886 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 19:06:09 crc kubenswrapper[4981]: W0227 19:06:09.662724 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod991e04a2_e14a_4987_a7d8_b7f5db5cb8e3.slice/crio-8f1ddb97a65a5da0da1fd55dca803bf43ca0ea1e390331a93cd9482174f63c65 WatchSource:0}: Error finding container 8f1ddb97a65a5da0da1fd55dca803bf43ca0ea1e390331a93cd9482174f63c65: Status 404 returned error can't find the container with id 8f1ddb97a65a5da0da1fd55dca803bf43ca0ea1e390331a93cd9482174f63c65 Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.742278 4981 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.743461 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.745341 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.754036 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.754198 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-x5s65" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.756219 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.759840 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.760478 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.831878 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/918ffa1d-14dc-4215-ad79-e545616bcfc5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " pod="openstack/openstack-galera-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.831954 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " 
pod="openstack/openstack-galera-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.831999 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918ffa1d-14dc-4215-ad79-e545616bcfc5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " pod="openstack/openstack-galera-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.832037 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/918ffa1d-14dc-4215-ad79-e545616bcfc5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " pod="openstack/openstack-galera-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.832087 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/918ffa1d-14dc-4215-ad79-e545616bcfc5-kolla-config\") pod \"openstack-galera-0\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " pod="openstack/openstack-galera-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.832125 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/918ffa1d-14dc-4215-ad79-e545616bcfc5-config-data-default\") pod \"openstack-galera-0\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " pod="openstack/openstack-galera-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.832158 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/918ffa1d-14dc-4215-ad79-e545616bcfc5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " 
pod="openstack/openstack-galera-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.832208 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m94vw\" (UniqueName: \"kubernetes.io/projected/918ffa1d-14dc-4215-ad79-e545616bcfc5-kube-api-access-m94vw\") pod \"openstack-galera-0\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " pod="openstack/openstack-galera-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.933132 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/918ffa1d-14dc-4215-ad79-e545616bcfc5-config-data-default\") pod \"openstack-galera-0\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " pod="openstack/openstack-galera-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.933173 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/918ffa1d-14dc-4215-ad79-e545616bcfc5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " pod="openstack/openstack-galera-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.933215 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m94vw\" (UniqueName: \"kubernetes.io/projected/918ffa1d-14dc-4215-ad79-e545616bcfc5-kube-api-access-m94vw\") pod \"openstack-galera-0\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " pod="openstack/openstack-galera-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.933243 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/918ffa1d-14dc-4215-ad79-e545616bcfc5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " pod="openstack/openstack-galera-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 
19:06:09.933272 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " pod="openstack/openstack-galera-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.933296 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918ffa1d-14dc-4215-ad79-e545616bcfc5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " pod="openstack/openstack-galera-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.933323 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/918ffa1d-14dc-4215-ad79-e545616bcfc5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " pod="openstack/openstack-galera-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.933347 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/918ffa1d-14dc-4215-ad79-e545616bcfc5-kolla-config\") pod \"openstack-galera-0\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " pod="openstack/openstack-galera-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.933803 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/918ffa1d-14dc-4215-ad79-e545616bcfc5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " pod="openstack/openstack-galera-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.934001 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/918ffa1d-14dc-4215-ad79-e545616bcfc5-kolla-config\") pod \"openstack-galera-0\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " pod="openstack/openstack-galera-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.934185 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.935931 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/918ffa1d-14dc-4215-ad79-e545616bcfc5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " pod="openstack/openstack-galera-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.936340 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/918ffa1d-14dc-4215-ad79-e545616bcfc5-config-data-default\") pod \"openstack-galera-0\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " pod="openstack/openstack-galera-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.937741 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918ffa1d-14dc-4215-ad79-e545616bcfc5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " pod="openstack/openstack-galera-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.940100 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/918ffa1d-14dc-4215-ad79-e545616bcfc5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " pod="openstack/openstack-galera-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.955110 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " pod="openstack/openstack-galera-0" Feb 27 19:06:09 crc kubenswrapper[4981]: I0227 19:06:09.968691 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m94vw\" (UniqueName: \"kubernetes.io/projected/918ffa1d-14dc-4215-ad79-e545616bcfc5-kube-api-access-m94vw\") pod \"openstack-galera-0\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " pod="openstack/openstack-galera-0" Feb 27 19:06:10 crc kubenswrapper[4981]: I0227 19:06:10.068379 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 27 19:06:10 crc kubenswrapper[4981]: I0227 19:06:10.157505 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3","Type":"ContainerStarted","Data":"8f1ddb97a65a5da0da1fd55dca803bf43ca0ea1e390331a93cd9482174f63c65"} Feb 27 19:06:10 crc kubenswrapper[4981]: I0227 19:06:10.545416 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 27 19:06:10 crc kubenswrapper[4981]: W0227 19:06:10.560047 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod918ffa1d_14dc_4215_ad79_e545616bcfc5.slice/crio-e157beb2c9fabc29967090c648f9f4962c3a2d1851fb9c578abff83008b81460 WatchSource:0}: Error finding container e157beb2c9fabc29967090c648f9f4962c3a2d1851fb9c578abff83008b81460: Status 404 returned error can't find the container with id e157beb2c9fabc29967090c648f9f4962c3a2d1851fb9c578abff83008b81460 Feb 27 19:06:11 crc 
kubenswrapper[4981]: I0227 19:06:11.894099 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 27 19:06:11 crc kubenswrapper[4981]: I0227 19:06:11.899017 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 27 19:06:11 crc kubenswrapper[4981]: I0227 19:06:11.899085 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"918ffa1d-14dc-4215-ad79-e545616bcfc5","Type":"ContainerStarted","Data":"e157beb2c9fabc29967090c648f9f4962c3a2d1851fb9c578abff83008b81460"} Feb 27 19:06:11 crc kubenswrapper[4981]: I0227 19:06:11.899206 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:11 crc kubenswrapper[4981]: I0227 19:06:11.902329 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 27 19:06:11 crc kubenswrapper[4981]: I0227 19:06:11.902793 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-hf2br" Feb 27 19:06:11 crc kubenswrapper[4981]: I0227 19:06:11.903159 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 27 19:06:11 crc kubenswrapper[4981]: I0227 19:06:11.903199 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.029942 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c22e070-8348-440e-a801-64927da21e98-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.029986 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1c22e070-8348-440e-a801-64927da21e98-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.030016 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c22e070-8348-440e-a801-64927da21e98-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.030034 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkjj4\" (UniqueName: \"kubernetes.io/projected/1c22e070-8348-440e-a801-64927da21e98-kube-api-access-wkjj4\") pod \"openstack-cell1-galera-0\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.030086 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1c22e070-8348-440e-a801-64927da21e98-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.030156 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1c22e070-8348-440e-a801-64927da21e98-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 
19:06:12.030186 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c22e070-8348-440e-a801-64927da21e98-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.030206 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.131778 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c22e070-8348-440e-a801-64927da21e98-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.132200 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1c22e070-8348-440e-a801-64927da21e98-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.132235 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c22e070-8348-440e-a801-64927da21e98-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.132261 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wkjj4\" (UniqueName: \"kubernetes.io/projected/1c22e070-8348-440e-a801-64927da21e98-kube-api-access-wkjj4\") pod \"openstack-cell1-galera-0\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.132305 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1c22e070-8348-440e-a801-64927da21e98-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.132347 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1c22e070-8348-440e-a801-64927da21e98-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.132374 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c22e070-8348-440e-a801-64927da21e98-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.132401 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.132682 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.133515 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1c22e070-8348-440e-a801-64927da21e98-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.134527 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1c22e070-8348-440e-a801-64927da21e98-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.135932 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c22e070-8348-440e-a801-64927da21e98-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.136534 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1c22e070-8348-440e-a801-64927da21e98-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.148513 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.151323 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.154707 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c22e070-8348-440e-a801-64927da21e98-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.157787 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c22e070-8348-440e-a801-64927da21e98-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.160611 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.160881 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-cw8kx" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.161441 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.177019 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.177450 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.192341 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkjj4\" (UniqueName: 
\"kubernetes.io/projected/1c22e070-8348-440e-a801-64927da21e98-kube-api-access-wkjj4\") pod \"openstack-cell1-galera-0\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.223489 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.334943 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b63f8c5e-ff68-4a07-a2a5-5c3290e21669\") " pod="openstack/memcached-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.334998 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-kolla-config\") pod \"memcached-0\" (UID: \"b63f8c5e-ff68-4a07-a2a5-5c3290e21669\") " pod="openstack/memcached-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.335038 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b63f8c5e-ff68-4a07-a2a5-5c3290e21669\") " pod="openstack/memcached-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.335070 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs55b\" (UniqueName: \"kubernetes.io/projected/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-kube-api-access-zs55b\") pod \"memcached-0\" (UID: \"b63f8c5e-ff68-4a07-a2a5-5c3290e21669\") " pod="openstack/memcached-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.335112 4981 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-config-data\") pod \"memcached-0\" (UID: \"b63f8c5e-ff68-4a07-a2a5-5c3290e21669\") " pod="openstack/memcached-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.439892 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-config-data\") pod \"memcached-0\" (UID: \"b63f8c5e-ff68-4a07-a2a5-5c3290e21669\") " pod="openstack/memcached-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.440416 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b63f8c5e-ff68-4a07-a2a5-5c3290e21669\") " pod="openstack/memcached-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.440479 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-kolla-config\") pod \"memcached-0\" (UID: \"b63f8c5e-ff68-4a07-a2a5-5c3290e21669\") " pod="openstack/memcached-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.440526 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b63f8c5e-ff68-4a07-a2a5-5c3290e21669\") " pod="openstack/memcached-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.440550 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs55b\" (UniqueName: \"kubernetes.io/projected/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-kube-api-access-zs55b\") pod 
\"memcached-0\" (UID: \"b63f8c5e-ff68-4a07-a2a5-5c3290e21669\") " pod="openstack/memcached-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.440882 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-config-data\") pod \"memcached-0\" (UID: \"b63f8c5e-ff68-4a07-a2a5-5c3290e21669\") " pod="openstack/memcached-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.441495 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-kolla-config\") pod \"memcached-0\" (UID: \"b63f8c5e-ff68-4a07-a2a5-5c3290e21669\") " pod="openstack/memcached-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.446757 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b63f8c5e-ff68-4a07-a2a5-5c3290e21669\") " pod="openstack/memcached-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.449546 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b63f8c5e-ff68-4a07-a2a5-5c3290e21669\") " pod="openstack/memcached-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.459938 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs55b\" (UniqueName: \"kubernetes.io/projected/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-kube-api-access-zs55b\") pod \"memcached-0\" (UID: \"b63f8c5e-ff68-4a07-a2a5-5c3290e21669\") " pod="openstack/memcached-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.541947 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.747560 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 27 19:06:12 crc kubenswrapper[4981]: I0227 19:06:12.874916 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1c22e070-8348-440e-a801-64927da21e98","Type":"ContainerStarted","Data":"987f100d8f498575cccb8c24e644d5a86184497fb3949337820498f8b9fff318"} Feb 27 19:06:13 crc kubenswrapper[4981]: I0227 19:06:13.005131 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 27 19:06:13 crc kubenswrapper[4981]: W0227 19:06:13.011073 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb63f8c5e_ff68_4a07_a2a5_5c3290e21669.slice/crio-fcc2b5357d07ea6b8f91a6ff8e503b5a858e7e33b9071aa0bd4d2c33954951a7 WatchSource:0}: Error finding container fcc2b5357d07ea6b8f91a6ff8e503b5a858e7e33b9071aa0bd4d2c33954951a7: Status 404 returned error can't find the container with id fcc2b5357d07ea6b8f91a6ff8e503b5a858e7e33b9071aa0bd4d2c33954951a7 Feb 27 19:06:13 crc kubenswrapper[4981]: I0227 19:06:13.609040 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 19:06:13 crc kubenswrapper[4981]: I0227 19:06:13.610417 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 19:06:13 crc kubenswrapper[4981]: I0227 19:06:13.612848 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-px6qb" Feb 27 19:06:13 crc kubenswrapper[4981]: I0227 19:06:13.626202 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 19:06:13 crc kubenswrapper[4981]: I0227 19:06:13.762769 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gcsf\" (UniqueName: \"kubernetes.io/projected/073fb193-6587-4c6c-b20d-82a5b3075a20-kube-api-access-9gcsf\") pod \"kube-state-metrics-0\" (UID: \"073fb193-6587-4c6c-b20d-82a5b3075a20\") " pod="openstack/kube-state-metrics-0" Feb 27 19:06:13 crc kubenswrapper[4981]: I0227 19:06:13.864598 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gcsf\" (UniqueName: \"kubernetes.io/projected/073fb193-6587-4c6c-b20d-82a5b3075a20-kube-api-access-9gcsf\") pod \"kube-state-metrics-0\" (UID: \"073fb193-6587-4c6c-b20d-82a5b3075a20\") " pod="openstack/kube-state-metrics-0" Feb 27 19:06:13 crc kubenswrapper[4981]: I0227 19:06:13.893382 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gcsf\" (UniqueName: \"kubernetes.io/projected/073fb193-6587-4c6c-b20d-82a5b3075a20-kube-api-access-9gcsf\") pod \"kube-state-metrics-0\" (UID: \"073fb193-6587-4c6c-b20d-82a5b3075a20\") " pod="openstack/kube-state-metrics-0" Feb 27 19:06:13 crc kubenswrapper[4981]: I0227 19:06:13.898936 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b63f8c5e-ff68-4a07-a2a5-5c3290e21669","Type":"ContainerStarted","Data":"fcc2b5357d07ea6b8f91a6ff8e503b5a858e7e33b9071aa0bd4d2c33954951a7"} Feb 27 19:06:13 crc kubenswrapper[4981]: I0227 19:06:13.955793 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 19:06:14 crc kubenswrapper[4981]: I0227 19:06:14.184353 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 19:06:14 crc kubenswrapper[4981]: W0227 19:06:14.196579 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod073fb193_6587_4c6c_b20d_82a5b3075a20.slice/crio-5f7e3ee9635e10d6c0b683ebb18e8767a18363635bd46a2bf65f017b1428a0b2 WatchSource:0}: Error finding container 5f7e3ee9635e10d6c0b683ebb18e8767a18363635bd46a2bf65f017b1428a0b2: Status 404 returned error can't find the container with id 5f7e3ee9635e10d6c0b683ebb18e8767a18363635bd46a2bf65f017b1428a0b2 Feb 27 19:06:14 crc kubenswrapper[4981]: I0227 19:06:14.912305 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"073fb193-6587-4c6c-b20d-82a5b3075a20","Type":"ContainerStarted","Data":"5f7e3ee9635e10d6c0b683ebb18e8767a18363635bd46a2bf65f017b1428a0b2"} Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.325315 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.340015 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.340159 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.346916 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.347551 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.349887 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.352789 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.354003 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-bvqpk" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.480769 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d57cb309-6812-4de2-a172-8d0896a7d864-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " pod="openstack/ovsdbserver-nb-0" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.480836 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7mtx\" (UniqueName: \"kubernetes.io/projected/d57cb309-6812-4de2-a172-8d0896a7d864-kube-api-access-g7mtx\") pod \"ovsdbserver-nb-0\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " pod="openstack/ovsdbserver-nb-0" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.481051 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d57cb309-6812-4de2-a172-8d0896a7d864-ovsdbserver-nb-tls-certs\") pod 
\"ovsdbserver-nb-0\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " pod="openstack/ovsdbserver-nb-0" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.485303 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " pod="openstack/ovsdbserver-nb-0" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.485367 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d57cb309-6812-4de2-a172-8d0896a7d864-config\") pod \"ovsdbserver-nb-0\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " pod="openstack/ovsdbserver-nb-0" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.485406 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d57cb309-6812-4de2-a172-8d0896a7d864-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " pod="openstack/ovsdbserver-nb-0" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.485430 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d57cb309-6812-4de2-a172-8d0896a7d864-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " pod="openstack/ovsdbserver-nb-0" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.485465 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d57cb309-6812-4de2-a172-8d0896a7d864-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " pod="openstack/ovsdbserver-nb-0" 
Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.586732 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7mtx\" (UniqueName: \"kubernetes.io/projected/d57cb309-6812-4de2-a172-8d0896a7d864-kube-api-access-g7mtx\") pod \"ovsdbserver-nb-0\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " pod="openstack/ovsdbserver-nb-0" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.586859 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d57cb309-6812-4de2-a172-8d0896a7d864-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " pod="openstack/ovsdbserver-nb-0" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.586906 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " pod="openstack/ovsdbserver-nb-0" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.586950 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d57cb309-6812-4de2-a172-8d0896a7d864-config\") pod \"ovsdbserver-nb-0\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " pod="openstack/ovsdbserver-nb-0" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.586999 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d57cb309-6812-4de2-a172-8d0896a7d864-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " pod="openstack/ovsdbserver-nb-0" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.587029 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/d57cb309-6812-4de2-a172-8d0896a7d864-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " pod="openstack/ovsdbserver-nb-0" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.587080 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d57cb309-6812-4de2-a172-8d0896a7d864-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " pod="openstack/ovsdbserver-nb-0" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.587141 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d57cb309-6812-4de2-a172-8d0896a7d864-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " pod="openstack/ovsdbserver-nb-0" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.587308 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-nb-0" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.622213 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d57cb309-6812-4de2-a172-8d0896a7d864-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " pod="openstack/ovsdbserver-nb-0" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.630621 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d57cb309-6812-4de2-a172-8d0896a7d864-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " pod="openstack/ovsdbserver-nb-0" Feb 
27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.634205 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d57cb309-6812-4de2-a172-8d0896a7d864-config\") pod \"ovsdbserver-nb-0\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " pod="openstack/ovsdbserver-nb-0" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.634214 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d57cb309-6812-4de2-a172-8d0896a7d864-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " pod="openstack/ovsdbserver-nb-0" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.641689 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d57cb309-6812-4de2-a172-8d0896a7d864-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " pod="openstack/ovsdbserver-nb-0" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.649784 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7mtx\" (UniqueName: \"kubernetes.io/projected/d57cb309-6812-4de2-a172-8d0896a7d864-kube-api-access-g7mtx\") pod \"ovsdbserver-nb-0\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " pod="openstack/ovsdbserver-nb-0" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.650507 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d57cb309-6812-4de2-a172-8d0896a7d864-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " pod="openstack/ovsdbserver-nb-0" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.666901 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " pod="openstack/ovsdbserver-nb-0" Feb 27 19:06:16 crc kubenswrapper[4981]: I0227 19:06:16.697644 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.482636 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.757937 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-n5d2t"] Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.759247 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n5d2t" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.762688 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-chvp2" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.762898 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.763160 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.766869 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n5d2t"] Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.810111 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-5xwl7"] Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.812656 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.823398 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5xwl7"] Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.842829 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-4s44w"] Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.843806 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/214d65cb-9030-4093-853c-c1485fc1a30a-ovn-controller-tls-certs\") pod \"ovn-controller-n5d2t\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " pod="openstack/ovn-controller-n5d2t" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.843878 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkh5p\" (UniqueName: \"kubernetes.io/projected/214d65cb-9030-4093-853c-c1485fc1a30a-kube-api-access-vkh5p\") pod \"ovn-controller-n5d2t\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " pod="openstack/ovn-controller-n5d2t" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.843909 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/214d65cb-9030-4093-853c-c1485fc1a30a-scripts\") pod \"ovn-controller-n5d2t\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " pod="openstack/ovn-controller-n5d2t" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.843945 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/214d65cb-9030-4093-853c-c1485fc1a30a-var-run\") pod \"ovn-controller-n5d2t\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " pod="openstack/ovn-controller-n5d2t" Feb 27 19:06:17 crc 
kubenswrapper[4981]: I0227 19:06:17.843987 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/214d65cb-9030-4093-853c-c1485fc1a30a-combined-ca-bundle\") pod \"ovn-controller-n5d2t\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " pod="openstack/ovn-controller-n5d2t" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.844114 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/214d65cb-9030-4093-853c-c1485fc1a30a-var-log-ovn\") pod \"ovn-controller-n5d2t\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " pod="openstack/ovn-controller-n5d2t" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.844152 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/214d65cb-9030-4093-853c-c1485fc1a30a-var-run-ovn\") pod \"ovn-controller-n5d2t\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " pod="openstack/ovn-controller-n5d2t" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.879828 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kh2tb"] Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.882646 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.887556 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.887981 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kh2tb"] Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.945709 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-etc-ovs\") pod \"ovn-controller-ovs-5xwl7\" (UID: \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\") " pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.945790 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-var-run\") pod \"ovn-controller-ovs-5xwl7\" (UID: \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\") " pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.945814 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/214d65cb-9030-4093-853c-c1485fc1a30a-ovn-controller-tls-certs\") pod \"ovn-controller-n5d2t\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " pod="openstack/ovn-controller-n5d2t" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.945839 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkh5p\" (UniqueName: \"kubernetes.io/projected/214d65cb-9030-4093-853c-c1485fc1a30a-kube-api-access-vkh5p\") pod \"ovn-controller-n5d2t\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " pod="openstack/ovn-controller-n5d2t" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 
19:06:17.945861 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/214d65cb-9030-4093-853c-c1485fc1a30a-var-run\") pod \"ovn-controller-n5d2t\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " pod="openstack/ovn-controller-n5d2t" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.945884 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/214d65cb-9030-4093-853c-c1485fc1a30a-combined-ca-bundle\") pod \"ovn-controller-n5d2t\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " pod="openstack/ovn-controller-n5d2t" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.945911 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-var-log\") pod \"ovn-controller-ovs-5xwl7\" (UID: \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\") " pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.945938 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/214d65cb-9030-4093-853c-c1485fc1a30a-var-log-ovn\") pod \"ovn-controller-n5d2t\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " pod="openstack/ovn-controller-n5d2t" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.945954 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44jlg\" (UniqueName: \"kubernetes.io/projected/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-kube-api-access-44jlg\") pod \"ovn-controller-ovs-5xwl7\" (UID: \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\") " pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.945973 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/214d65cb-9030-4093-853c-c1485fc1a30a-var-run-ovn\") pod \"ovn-controller-n5d2t\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " pod="openstack/ovn-controller-n5d2t" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.945990 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-var-lib\") pod \"ovn-controller-ovs-5xwl7\" (UID: \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\") " pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.946013 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-scripts\") pod \"ovn-controller-ovs-5xwl7\" (UID: \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\") " pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.946031 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/214d65cb-9030-4093-853c-c1485fc1a30a-scripts\") pod \"ovn-controller-n5d2t\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " pod="openstack/ovn-controller-n5d2t" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.948162 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/214d65cb-9030-4093-853c-c1485fc1a30a-scripts\") pod \"ovn-controller-n5d2t\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " pod="openstack/ovn-controller-n5d2t" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.950038 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/214d65cb-9030-4093-853c-c1485fc1a30a-var-run-ovn\") pod \"ovn-controller-n5d2t\" (UID: 
\"214d65cb-9030-4093-853c-c1485fc1a30a\") " pod="openstack/ovn-controller-n5d2t" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.950309 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/214d65cb-9030-4093-853c-c1485fc1a30a-var-run\") pod \"ovn-controller-n5d2t\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " pod="openstack/ovn-controller-n5d2t" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.952964 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/214d65cb-9030-4093-853c-c1485fc1a30a-var-log-ovn\") pod \"ovn-controller-n5d2t\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " pod="openstack/ovn-controller-n5d2t" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.956299 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/214d65cb-9030-4093-853c-c1485fc1a30a-ovn-controller-tls-certs\") pod \"ovn-controller-n5d2t\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " pod="openstack/ovn-controller-n5d2t" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.978429 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/214d65cb-9030-4093-853c-c1485fc1a30a-combined-ca-bundle\") pod \"ovn-controller-n5d2t\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " pod="openstack/ovn-controller-n5d2t" Feb 27 19:06:17 crc kubenswrapper[4981]: I0227 19:06:17.991375 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkh5p\" (UniqueName: \"kubernetes.io/projected/214d65cb-9030-4093-853c-c1485fc1a30a-kube-api-access-vkh5p\") pod \"ovn-controller-n5d2t\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " pod="openstack/ovn-controller-n5d2t" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.055008 4981 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klkb8\" (UniqueName: \"kubernetes.io/projected/bec5bfe1-9b74-494e-92e6-6482c06995b9-kube-api-access-klkb8\") pod \"dnsmasq-dns-7fd796d7df-kh2tb\" (UID: \"bec5bfe1-9b74-494e-92e6-6482c06995b9\") " pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.055112 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44jlg\" (UniqueName: \"kubernetes.io/projected/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-kube-api-access-44jlg\") pod \"ovn-controller-ovs-5xwl7\" (UID: \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\") " pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.055169 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-var-lib\") pod \"ovn-controller-ovs-5xwl7\" (UID: \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\") " pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.055207 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-scripts\") pod \"ovn-controller-ovs-5xwl7\" (UID: \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\") " pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.055256 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bec5bfe1-9b74-494e-92e6-6482c06995b9-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-kh2tb\" (UID: \"bec5bfe1-9b74-494e-92e6-6482c06995b9\") " pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.055284 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-etc-ovs\") pod \"ovn-controller-ovs-5xwl7\" (UID: \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\") " pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.055324 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bec5bfe1-9b74-494e-92e6-6482c06995b9-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-kh2tb\" (UID: \"bec5bfe1-9b74-494e-92e6-6482c06995b9\") " pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.055354 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bec5bfe1-9b74-494e-92e6-6482c06995b9-config\") pod \"dnsmasq-dns-7fd796d7df-kh2tb\" (UID: \"bec5bfe1-9b74-494e-92e6-6482c06995b9\") " pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.055385 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-var-run\") pod \"ovn-controller-ovs-5xwl7\" (UID: \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\") " pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.055457 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-var-log\") pod \"ovn-controller-ovs-5xwl7\" (UID: \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\") " pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.055831 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-var-log\") pod \"ovn-controller-ovs-5xwl7\" (UID: \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\") " pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.056344 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-var-run\") pod \"ovn-controller-ovs-5xwl7\" (UID: \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\") " pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.056332 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-etc-ovs\") pod \"ovn-controller-ovs-5xwl7\" (UID: \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\") " pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.056497 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-var-lib\") pod \"ovn-controller-ovs-5xwl7\" (UID: \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\") " pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.059863 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-scripts\") pod \"ovn-controller-ovs-5xwl7\" (UID: \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\") " pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.102545 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44jlg\" (UniqueName: \"kubernetes.io/projected/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-kube-api-access-44jlg\") pod \"ovn-controller-ovs-5xwl7\" (UID: \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\") " 
pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.125372 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n5d2t" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.145739 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.158556 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bec5bfe1-9b74-494e-92e6-6482c06995b9-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-kh2tb\" (UID: \"bec5bfe1-9b74-494e-92e6-6482c06995b9\") " pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.158678 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bec5bfe1-9b74-494e-92e6-6482c06995b9-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-kh2tb\" (UID: \"bec5bfe1-9b74-494e-92e6-6482c06995b9\") " pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.158710 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bec5bfe1-9b74-494e-92e6-6482c06995b9-config\") pod \"dnsmasq-dns-7fd796d7df-kh2tb\" (UID: \"bec5bfe1-9b74-494e-92e6-6482c06995b9\") " pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.158776 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klkb8\" (UniqueName: \"kubernetes.io/projected/bec5bfe1-9b74-494e-92e6-6482c06995b9-kube-api-access-klkb8\") pod \"dnsmasq-dns-7fd796d7df-kh2tb\" (UID: \"bec5bfe1-9b74-494e-92e6-6482c06995b9\") " pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.159432 
4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bec5bfe1-9b74-494e-92e6-6482c06995b9-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-kh2tb\" (UID: \"bec5bfe1-9b74-494e-92e6-6482c06995b9\") " pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.159932 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bec5bfe1-9b74-494e-92e6-6482c06995b9-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-kh2tb\" (UID: \"bec5bfe1-9b74-494e-92e6-6482c06995b9\") " pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.161910 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bec5bfe1-9b74-494e-92e6-6482c06995b9-config\") pod \"dnsmasq-dns-7fd796d7df-kh2tb\" (UID: \"bec5bfe1-9b74-494e-92e6-6482c06995b9\") " pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.190152 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klkb8\" (UniqueName: \"kubernetes.io/projected/bec5bfe1-9b74-494e-92e6-6482c06995b9-kube-api-access-klkb8\") pod \"dnsmasq-dns-7fd796d7df-kh2tb\" (UID: \"bec5bfe1-9b74-494e-92e6-6482c06995b9\") " pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" Feb 27 19:06:18 crc kubenswrapper[4981]: I0227 19:06:18.205093 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.034293 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.035719 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.037656 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.037719 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.037743 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.043977 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-vxjr4" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.048606 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.123687 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e48390e6-5fc4-4c7e-983d-8338bf663e75-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.123724 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48390e6-5fc4-4c7e-983d-8338bf663e75-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.123750 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnfns\" (UniqueName: \"kubernetes.io/projected/e48390e6-5fc4-4c7e-983d-8338bf663e75-kube-api-access-vnfns\") pod \"ovsdbserver-sb-0\" (UID: 
\"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.123823 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e48390e6-5fc4-4c7e-983d-8338bf663e75-config\") pod \"ovsdbserver-sb-0\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.124097 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e48390e6-5fc4-4c7e-983d-8338bf663e75-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.124161 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e48390e6-5fc4-4c7e-983d-8338bf663e75-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.124237 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.124299 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e48390e6-5fc4-4c7e-983d-8338bf663e75-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:21 crc 
kubenswrapper[4981]: I0227 19:06:21.228342 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e48390e6-5fc4-4c7e-983d-8338bf663e75-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.228423 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.228451 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e48390e6-5fc4-4c7e-983d-8338bf663e75-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.228498 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e48390e6-5fc4-4c7e-983d-8338bf663e75-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.228519 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48390e6-5fc4-4c7e-983d-8338bf663e75-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.228539 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnfns\" (UniqueName: 
\"kubernetes.io/projected/e48390e6-5fc4-4c7e-983d-8338bf663e75-kube-api-access-vnfns\") pod \"ovsdbserver-sb-0\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.228565 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e48390e6-5fc4-4c7e-983d-8338bf663e75-config\") pod \"ovsdbserver-sb-0\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.228641 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e48390e6-5fc4-4c7e-983d-8338bf663e75-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.228850 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.230094 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e48390e6-5fc4-4c7e-983d-8338bf663e75-config\") pod \"ovsdbserver-sb-0\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.230370 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e48390e6-5fc4-4c7e-983d-8338bf663e75-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " pod="openstack/ovsdbserver-sb-0" Feb 27 
19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.230395 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e48390e6-5fc4-4c7e-983d-8338bf663e75-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.240357 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48390e6-5fc4-4c7e-983d-8338bf663e75-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.241934 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e48390e6-5fc4-4c7e-983d-8338bf663e75-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.242800 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e48390e6-5fc4-4c7e-983d-8338bf663e75-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.245921 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnfns\" (UniqueName: \"kubernetes.io/projected/e48390e6-5fc4-4c7e-983d-8338bf663e75-kube-api-access-vnfns\") pod \"ovsdbserver-sb-0\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.253965 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:21 crc kubenswrapper[4981]: I0227 19:06:21.362038 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 27 19:06:23 crc kubenswrapper[4981]: W0227 19:06:23.477210 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd57cb309_6812_4de2_a172_8d0896a7d864.slice/crio-b950381ad932dec85f17cece713ee3798ca0dab46a203fac4c943412d83e3254 WatchSource:0}: Error finding container b950381ad932dec85f17cece713ee3798ca0dab46a203fac4c943412d83e3254: Status 404 returned error can't find the container with id b950381ad932dec85f17cece713ee3798ca0dab46a203fac4c943412d83e3254 Feb 27 19:06:23 crc kubenswrapper[4981]: I0227 19:06:23.996861 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d57cb309-6812-4de2-a172-8d0896a7d864","Type":"ContainerStarted","Data":"b950381ad932dec85f17cece713ee3798ca0dab46a203fac4c943412d83e3254"} Feb 27 19:06:38 crc kubenswrapper[4981]: E0227 19:06:38.981304 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 27 19:06:38 crc kubenswrapper[4981]: E0227 19:06:38.982111 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m94vw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(918ffa1d-14dc-4215-ad79-e545616bcfc5): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:06:38 crc kubenswrapper[4981]: E0227 19:06:38.983462 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="918ffa1d-14dc-4215-ad79-e545616bcfc5" Feb 27 19:06:39 crc kubenswrapper[4981]: E0227 19:06:39.143871 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="918ffa1d-14dc-4215-ad79-e545616bcfc5" Feb 27 19:06:41 crc kubenswrapper[4981]: E0227 19:06:41.391391 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 27 19:06:41 crc kubenswrapper[4981]: E0227 19:06:41.391572 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wkjj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openstack-cell1-galera-0_openstack(1c22e070-8348-440e-a801-64927da21e98): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:06:41 crc kubenswrapper[4981]: E0227 19:06:41.392755 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="1c22e070-8348-440e-a801-64927da21e98" Feb 27 19:06:41 crc kubenswrapper[4981]: E0227 19:06:41.756741 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="1c22e070-8348-440e-a801-64927da21e98" Feb 27 19:06:48 crc kubenswrapper[4981]: E0227 19:06:48.107279 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 27 19:06:48 crc kubenswrapper[4981]: E0227 19:06:48.107693 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 
30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hkpxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerR
esizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(f928877c-eaff-4ab4-ae3b-ba6ed721642c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:06:48 crc kubenswrapper[4981]: E0227 19:06:48.108875 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="f928877c-eaff-4ab4-ae3b-ba6ed721642c" Feb 27 19:06:48 crc kubenswrapper[4981]: E0227 19:06:48.408188 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 27 19:06:48 crc kubenswrapper[4981]: E0227 19:06:48.408347 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwswc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(991e04a2-e14a-4987-a7d8-b7f5db5cb8e3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:06:48 crc 
kubenswrapper[4981]: E0227 19:06:48.410351 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" Feb 27 19:06:48 crc kubenswrapper[4981]: E0227 19:06:48.961751 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="f928877c-eaff-4ab4-ae3b-ba6ed721642c" Feb 27 19:06:48 crc kubenswrapper[4981]: E0227 19:06:48.963204 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" Feb 27 19:06:54 crc kubenswrapper[4981]: E0227 19:06:54.798923 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Feb 27 19:06:54 crc kubenswrapper[4981]: E0227 19:06:54.799545 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:nc9h569h5bch66ch67fhdfh94hfbh677h654h7chc7h5bfh66bh665h577h64dh5hd8hb7h559h658h589hffh66fh684h676hf8hbdh58ch687h698q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zs55b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(b63f8c5e-ff68-4a07-a2a5-5c3290e21669): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:06:54 crc kubenswrapper[4981]: E0227 19:06:54.801026 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="b63f8c5e-ff68-4a07-a2a5-5c3290e21669" Feb 27 19:06:55 crc kubenswrapper[4981]: E0227 19:06:55.009339 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="b63f8c5e-ff68-4a07-a2a5-5c3290e21669" Feb 27 19:07:01 crc kubenswrapper[4981]: E0227 19:07:01.700282 4981 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 27 19:07:01 crc kubenswrapper[4981]: E0227 19:07:01.701315 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nssks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesyste
m:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-dnkp5_openstack(dabb19c5-377f-433c-b523-68d8e1296f9b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:07:01 crc kubenswrapper[4981]: E0227 19:07:01.703393 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-dnkp5" podUID="dabb19c5-377f-433c-b523-68d8e1296f9b" Feb 27 19:07:01 crc kubenswrapper[4981]: E0227 19:07:01.726808 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 27 19:07:01 crc kubenswrapper[4981]: E0227 19:07:01.726948 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-585jt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-4s44w_openstack(44f73603-dc6b-4e0b-bbef-ab35a4783e0e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:07:01 crc kubenswrapper[4981]: E0227 19:07:01.728381 4981 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-4s44w" podUID="44f73603-dc6b-4e0b-bbef-ab35a4783e0e" Feb 27 19:07:01 crc kubenswrapper[4981]: E0227 19:07:01.786824 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 27 19:07:01 crc kubenswrapper[4981]: E0227 19:07:01.787203 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dhxb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-tm5gc_openstack(d3d98bc1-ffa8-4522-9777-4aded2d46699): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:07:01 crc kubenswrapper[4981]: E0227 19:07:01.788290 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-tm5gc" podUID="d3d98bc1-ffa8-4522-9777-4aded2d46699" Feb 27 19:07:01 crc kubenswrapper[4981]: E0227 19:07:01.792045 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 27 19:07:01 crc kubenswrapper[4981]: E0227 19:07:01.792279 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8bhk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePul
lPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-4r6lz_openstack(901a509c-93f9-4a67-9321-88794b748b17): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:07:01 crc kubenswrapper[4981]: E0227 19:07:01.793375 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-4r6lz" podUID="901a509c-93f9-4a67-9321-88794b748b17" Feb 27 19:07:02 crc kubenswrapper[4981]: E0227 19:07:02.090827 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified" Feb 27 19:07:02 crc kubenswrapper[4981]: E0227 19:07:02.091068 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-nb,Image:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n687hd8h87h594h654h675h74hd4hcdh55h67h5d8h549hdchcdh677hd5h576h66bh678h584h5b9h5d9h5c6h5d9h5b5h567h57bhch54dh656h58dq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7mtx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecActi
on{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(d57cb309-6812-4de2-a172-8d0896a7d864): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:07:02 crc kubenswrapper[4981]: I0227 19:07:02.116328 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n5d2t"] Feb 27 19:07:02 crc kubenswrapper[4981]: E0227 19:07:02.283109 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-dnkp5" podUID="dabb19c5-377f-433c-b523-68d8e1296f9b" Feb 27 19:07:02 crc kubenswrapper[4981]: I0227 19:07:02.761135 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 27 19:07:03 crc kubenswrapper[4981]: I0227 19:07:03.010971 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5xwl7"] Feb 27 19:07:03 crc kubenswrapper[4981]: I0227 19:07:03.024399 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kh2tb"] Feb 27 19:07:03 crc kubenswrapper[4981]: I0227 19:07:03.295213 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e48390e6-5fc4-4c7e-983d-8338bf663e75","Type":"ContainerStarted","Data":"69a3f607a6e0f7579865d967b1d4a552c510c27fe2811ab7df2ad0839c99d9d8"} Feb 27 19:07:03 crc kubenswrapper[4981]: I0227 19:07:03.296396 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n5d2t" event={"ID":"214d65cb-9030-4093-853c-c1485fc1a30a","Type":"ContainerStarted","Data":"ba63240105611112c7eade014bae6a4f00bd9f02ecaef4b06b4fc559ad12a48b"} Feb 27 19:07:03 crc kubenswrapper[4981]: E0227 19:07:03.430502 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 27 19:07:03 crc kubenswrapper[4981]: E0227 19:07:03.430540 4981 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 27 19:07:03 crc kubenswrapper[4981]: E0227 
19:07:03.430669 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9gcsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(073fb193-6587-4c6c-b20d-82a5b3075a20): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" logger="UnhandledError" Feb 27 19:07:03 crc kubenswrapper[4981]: E0227 19:07:03.431817 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="073fb193-6587-4c6c-b20d-82a5b3075a20" Feb 27 19:07:03 crc kubenswrapper[4981]: I0227 19:07:03.653522 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-4s44w" Feb 27 19:07:03 crc kubenswrapper[4981]: I0227 19:07:03.830218 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-585jt\" (UniqueName: \"kubernetes.io/projected/44f73603-dc6b-4e0b-bbef-ab35a4783e0e-kube-api-access-585jt\") pod \"44f73603-dc6b-4e0b-bbef-ab35a4783e0e\" (UID: \"44f73603-dc6b-4e0b-bbef-ab35a4783e0e\") " Feb 27 19:07:03 crc kubenswrapper[4981]: I0227 19:07:03.830456 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44f73603-dc6b-4e0b-bbef-ab35a4783e0e-config\") pod \"44f73603-dc6b-4e0b-bbef-ab35a4783e0e\" (UID: \"44f73603-dc6b-4e0b-bbef-ab35a4783e0e\") " Feb 27 19:07:03 crc kubenswrapper[4981]: I0227 19:07:03.830502 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44f73603-dc6b-4e0b-bbef-ab35a4783e0e-dns-svc\") pod \"44f73603-dc6b-4e0b-bbef-ab35a4783e0e\" (UID: \"44f73603-dc6b-4e0b-bbef-ab35a4783e0e\") " Feb 27 19:07:03 crc kubenswrapper[4981]: I0227 19:07:03.831041 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44f73603-dc6b-4e0b-bbef-ab35a4783e0e-config" (OuterVolumeSpecName: "config") pod "44f73603-dc6b-4e0b-bbef-ab35a4783e0e" (UID: "44f73603-dc6b-4e0b-bbef-ab35a4783e0e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:07:03 crc kubenswrapper[4981]: I0227 19:07:03.831193 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44f73603-dc6b-4e0b-bbef-ab35a4783e0e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "44f73603-dc6b-4e0b-bbef-ab35a4783e0e" (UID: "44f73603-dc6b-4e0b-bbef-ab35a4783e0e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:07:03 crc kubenswrapper[4981]: I0227 19:07:03.833431 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44f73603-dc6b-4e0b-bbef-ab35a4783e0e-kube-api-access-585jt" (OuterVolumeSpecName: "kube-api-access-585jt") pod "44f73603-dc6b-4e0b-bbef-ab35a4783e0e" (UID: "44f73603-dc6b-4e0b-bbef-ab35a4783e0e"). InnerVolumeSpecName "kube-api-access-585jt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:07:03 crc kubenswrapper[4981]: I0227 19:07:03.932306 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44f73603-dc6b-4e0b-bbef-ab35a4783e0e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:03 crc kubenswrapper[4981]: I0227 19:07:03.932336 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-585jt\" (UniqueName: \"kubernetes.io/projected/44f73603-dc6b-4e0b-bbef-ab35a4783e0e-kube-api-access-585jt\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:03 crc kubenswrapper[4981]: I0227 19:07:03.932398 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44f73603-dc6b-4e0b-bbef-ab35a4783e0e-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:03.977488 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-4r6lz" Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:03.986238 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tm5gc" Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.134659 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d98bc1-ffa8-4522-9777-4aded2d46699-config\") pod \"d3d98bc1-ffa8-4522-9777-4aded2d46699\" (UID: \"d3d98bc1-ffa8-4522-9777-4aded2d46699\") " Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.134740 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/901a509c-93f9-4a67-9321-88794b748b17-config\") pod \"901a509c-93f9-4a67-9321-88794b748b17\" (UID: \"901a509c-93f9-4a67-9321-88794b748b17\") " Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.134832 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/901a509c-93f9-4a67-9321-88794b748b17-dns-svc\") pod \"901a509c-93f9-4a67-9321-88794b748b17\" (UID: \"901a509c-93f9-4a67-9321-88794b748b17\") " Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.134874 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bhk8\" (UniqueName: \"kubernetes.io/projected/901a509c-93f9-4a67-9321-88794b748b17-kube-api-access-8bhk8\") pod \"901a509c-93f9-4a67-9321-88794b748b17\" (UID: \"901a509c-93f9-4a67-9321-88794b748b17\") " Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.134995 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhxb8\" (UniqueName: \"kubernetes.io/projected/d3d98bc1-ffa8-4522-9777-4aded2d46699-kube-api-access-dhxb8\") pod \"d3d98bc1-ffa8-4522-9777-4aded2d46699\" (UID: \"d3d98bc1-ffa8-4522-9777-4aded2d46699\") " Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.135047 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/d3d98bc1-ffa8-4522-9777-4aded2d46699-config" (OuterVolumeSpecName: "config") pod "d3d98bc1-ffa8-4522-9777-4aded2d46699" (UID: "d3d98bc1-ffa8-4522-9777-4aded2d46699"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.135342 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d98bc1-ffa8-4522-9777-4aded2d46699-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.135383 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/901a509c-93f9-4a67-9321-88794b748b17-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "901a509c-93f9-4a67-9321-88794b748b17" (UID: "901a509c-93f9-4a67-9321-88794b748b17"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.136402 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/901a509c-93f9-4a67-9321-88794b748b17-config" (OuterVolumeSpecName: "config") pod "901a509c-93f9-4a67-9321-88794b748b17" (UID: "901a509c-93f9-4a67-9321-88794b748b17"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.217288 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3d98bc1-ffa8-4522-9777-4aded2d46699-kube-api-access-dhxb8" (OuterVolumeSpecName: "kube-api-access-dhxb8") pod "d3d98bc1-ffa8-4522-9777-4aded2d46699" (UID: "d3d98bc1-ffa8-4522-9777-4aded2d46699"). InnerVolumeSpecName "kube-api-access-dhxb8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.218595 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/901a509c-93f9-4a67-9321-88794b748b17-kube-api-access-8bhk8" (OuterVolumeSpecName: "kube-api-access-8bhk8") pod "901a509c-93f9-4a67-9321-88794b748b17" (UID: "901a509c-93f9-4a67-9321-88794b748b17"). InnerVolumeSpecName "kube-api-access-8bhk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.238533 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/901a509c-93f9-4a67-9321-88794b748b17-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.238578 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/901a509c-93f9-4a67-9321-88794b748b17-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.238599 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bhk8\" (UniqueName: \"kubernetes.io/projected/901a509c-93f9-4a67-9321-88794b748b17-kube-api-access-8bhk8\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.238619 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhxb8\" (UniqueName: \"kubernetes.io/projected/d3d98bc1-ffa8-4522-9777-4aded2d46699-kube-api-access-dhxb8\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.306455 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-4s44w" event={"ID":"44f73603-dc6b-4e0b-bbef-ab35a4783e0e","Type":"ContainerDied","Data":"cd2056e9f044acb57439bae1a206642716fe564ea703d6d4c4b8a29e44a6cf4d"} Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.306584 4981 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-4s44w" Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.322974 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5xwl7" event={"ID":"a1d85462-e999-48fc-8c36-ce8bbe60ed3d","Type":"ContainerStarted","Data":"97d198b6d642657f2ee2d06b0e584af11befc27e2a5cfae696491be27e0596c6"} Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.324833 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"918ffa1d-14dc-4215-ad79-e545616bcfc5","Type":"ContainerStarted","Data":"fa18ada0fdd4fbd7e4904a65bca4de4d6dfbc3eb64c989b47c9399379c99d8be"} Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.327892 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-tm5gc" event={"ID":"d3d98bc1-ffa8-4522-9777-4aded2d46699","Type":"ContainerDied","Data":"869f2b63bcf6e45ef30c8775df0ec9b8d223a760d218d0a1c67aacd8a65158b7"} Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.327921 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-tm5gc" Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.329507 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-4r6lz" Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.329462 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-4r6lz" event={"ID":"901a509c-93f9-4a67-9321-88794b748b17","Type":"ContainerDied","Data":"321bc351a8a7bb81f6a4493c5cc431ad08711b2af1c27e3cb1cb594c5d33f37d"} Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.330603 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" event={"ID":"bec5bfe1-9b74-494e-92e6-6482c06995b9","Type":"ContainerStarted","Data":"718deef57c61884e17af1d31235adceab41647fafef506096ca012123183087e"} Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.332177 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1c22e070-8348-440e-a801-64927da21e98","Type":"ContainerStarted","Data":"b8bfb43832e3fd67b407a361e72e206020352adf5ad9c8cc0f364e5cfb240b8c"} Feb 27 19:07:04 crc kubenswrapper[4981]: E0227 19:07:04.333018 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="073fb193-6587-4c6c-b20d-82a5b3075a20" Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.365790 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-4s44w"] Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.371487 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-4s44w"] Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.438201 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-tm5gc"] Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.457837 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-675f4bcbfc-tm5gc"] Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.971204 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4r6lz"] Feb 27 19:07:04 crc kubenswrapper[4981]: I0227 19:07:04.978423 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-4r6lz"] Feb 27 19:07:05 crc kubenswrapper[4981]: I0227 19:07:05.343902 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3","Type":"ContainerStarted","Data":"739bf9a3ac6ea571ec9bca214592f39bd326faa6890041f869de8b117f30fc3d"} Feb 27 19:07:05 crc kubenswrapper[4981]: I0227 19:07:05.346460 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f928877c-eaff-4ab4-ae3b-ba6ed721642c","Type":"ContainerStarted","Data":"d378fa26bc8f0c5f0f946f4dfecf68788a807fe6b3c792500d882ea0dd773eb9"} Feb 27 19:07:05 crc kubenswrapper[4981]: I0227 19:07:05.639720 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44f73603-dc6b-4e0b-bbef-ab35a4783e0e" path="/var/lib/kubelet/pods/44f73603-dc6b-4e0b-bbef-ab35a4783e0e/volumes" Feb 27 19:07:05 crc kubenswrapper[4981]: I0227 19:07:05.640098 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="901a509c-93f9-4a67-9321-88794b748b17" path="/var/lib/kubelet/pods/901a509c-93f9-4a67-9321-88794b748b17/volumes" Feb 27 19:07:05 crc kubenswrapper[4981]: I0227 19:07:05.664209 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3d98bc1-ffa8-4522-9777-4aded2d46699" path="/var/lib/kubelet/pods/d3d98bc1-ffa8-4522-9777-4aded2d46699/volumes" Feb 27 19:07:06 crc kubenswrapper[4981]: I0227 19:07:06.353263 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n5d2t" 
event={"ID":"214d65cb-9030-4093-853c-c1485fc1a30a","Type":"ContainerStarted","Data":"1d7e957ec0b2bda077dee170917b2cb8f7028331f4696bcedbf1e0135c091783"} Feb 27 19:07:06 crc kubenswrapper[4981]: I0227 19:07:06.354634 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-n5d2t" Feb 27 19:07:06 crc kubenswrapper[4981]: I0227 19:07:06.358349 4981 generic.go:334] "Generic (PLEG): container finished" podID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" containerID="787cfc4f63fedaed0585d22d5e64190ea52cda576c2784f0c43fce945146b360" exitCode=0 Feb 27 19:07:06 crc kubenswrapper[4981]: I0227 19:07:06.358389 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5xwl7" event={"ID":"a1d85462-e999-48fc-8c36-ce8bbe60ed3d","Type":"ContainerDied","Data":"787cfc4f63fedaed0585d22d5e64190ea52cda576c2784f0c43fce945146b360"} Feb 27 19:07:06 crc kubenswrapper[4981]: I0227 19:07:06.359924 4981 generic.go:334] "Generic (PLEG): container finished" podID="bec5bfe1-9b74-494e-92e6-6482c06995b9" containerID="14230eb3667a747b265f98dd4b8981a8801be4fef7075b9f388f5993e58f4075" exitCode=0 Feb 27 19:07:06 crc kubenswrapper[4981]: I0227 19:07:06.359953 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" event={"ID":"bec5bfe1-9b74-494e-92e6-6482c06995b9","Type":"ContainerDied","Data":"14230eb3667a747b265f98dd4b8981a8801be4fef7075b9f388f5993e58f4075"} Feb 27 19:07:06 crc kubenswrapper[4981]: I0227 19:07:06.383314 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-n5d2t" podStartSLOduration=47.101492659 podStartE2EDuration="49.383287568s" podCreationTimestamp="2026-02-27 19:06:17 +0000 UTC" firstStartedPulling="2026-02-27 19:07:03.27987172 +0000 UTC m=+1322.758652880" lastFinishedPulling="2026-02-27 19:07:05.561666619 +0000 UTC m=+1325.040447789" observedRunningTime="2026-02-27 19:07:06.376639587 +0000 UTC m=+1325.855420747" 
watchObservedRunningTime="2026-02-27 19:07:06.383287568 +0000 UTC m=+1325.862068748" Feb 27 19:07:07 crc kubenswrapper[4981]: I0227 19:07:07.390895 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" event={"ID":"bec5bfe1-9b74-494e-92e6-6482c06995b9","Type":"ContainerStarted","Data":"3628d94c5816c499aa00ef37464a0e2d10ee1c7be9bdc0223c85be0c216783e0"} Feb 27 19:07:07 crc kubenswrapper[4981]: I0227 19:07:07.391226 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" Feb 27 19:07:07 crc kubenswrapper[4981]: I0227 19:07:07.396072 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5xwl7" event={"ID":"a1d85462-e999-48fc-8c36-ce8bbe60ed3d","Type":"ContainerStarted","Data":"2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17"} Feb 27 19:07:07 crc kubenswrapper[4981]: I0227 19:07:07.413173 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" podStartSLOduration=48.68275982 podStartE2EDuration="50.413148493s" podCreationTimestamp="2026-02-27 19:06:17 +0000 UTC" firstStartedPulling="2026-02-27 19:07:03.428518873 +0000 UTC m=+1322.907300033" lastFinishedPulling="2026-02-27 19:07:05.158907546 +0000 UTC m=+1324.637688706" observedRunningTime="2026-02-27 19:07:07.406179302 +0000 UTC m=+1326.884960462" watchObservedRunningTime="2026-02-27 19:07:07.413148493 +0000 UTC m=+1326.891929663" Feb 27 19:07:09 crc kubenswrapper[4981]: I0227 19:07:09.188833 4981 scope.go:117] "RemoveContainer" containerID="1ec792a976db7e4b656154acd87e34d37f61a528243e6e09fe73ec1e9140c159" Feb 27 19:07:09 crc kubenswrapper[4981]: I0227 19:07:09.415372 4981 generic.go:334] "Generic (PLEG): container finished" podID="918ffa1d-14dc-4215-ad79-e545616bcfc5" containerID="fa18ada0fdd4fbd7e4904a65bca4de4d6dfbc3eb64c989b47c9399379c99d8be" exitCode=0 Feb 27 19:07:09 crc kubenswrapper[4981]: I0227 
19:07:09.415440 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"918ffa1d-14dc-4215-ad79-e545616bcfc5","Type":"ContainerDied","Data":"fa18ada0fdd4fbd7e4904a65bca4de4d6dfbc3eb64c989b47c9399379c99d8be"} Feb 27 19:07:09 crc kubenswrapper[4981]: I0227 19:07:09.417073 4981 generic.go:334] "Generic (PLEG): container finished" podID="1c22e070-8348-440e-a801-64927da21e98" containerID="b8bfb43832e3fd67b407a361e72e206020352adf5ad9c8cc0f364e5cfb240b8c" exitCode=0 Feb 27 19:07:09 crc kubenswrapper[4981]: I0227 19:07:09.417097 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1c22e070-8348-440e-a801-64927da21e98","Type":"ContainerDied","Data":"b8bfb43832e3fd67b407a361e72e206020352adf5ad9c8cc0f364e5cfb240b8c"} Feb 27 19:07:09 crc kubenswrapper[4981]: E0227 19:07:09.725778 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="d57cb309-6812-4de2-a172-8d0896a7d864" Feb 27 19:07:10 crc kubenswrapper[4981]: I0227 19:07:10.429197 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"918ffa1d-14dc-4215-ad79-e545616bcfc5","Type":"ContainerStarted","Data":"2383d387853c55d3b03208088009493504f3c3e88fd1ec79f4ffae6e55db5669"} Feb 27 19:07:10 crc kubenswrapper[4981]: I0227 19:07:10.433943 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5xwl7" event={"ID":"a1d85462-e999-48fc-8c36-ce8bbe60ed3d","Type":"ContainerStarted","Data":"ee6fb96a4b332552632dd9dc8737353181f1aa6fa1493b16b542b6e241c02773"} Feb 27 19:07:10 crc kubenswrapper[4981]: I0227 19:07:10.434288 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:07:10 crc kubenswrapper[4981]: 
I0227 19:07:10.436091 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b63f8c5e-ff68-4a07-a2a5-5c3290e21669","Type":"ContainerStarted","Data":"2831b1fbd633eee20cda167168b129b60e56a04ba92a023a388d553dde52965e"} Feb 27 19:07:10 crc kubenswrapper[4981]: I0227 19:07:10.436363 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 27 19:07:10 crc kubenswrapper[4981]: I0227 19:07:10.438013 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d57cb309-6812-4de2-a172-8d0896a7d864","Type":"ContainerStarted","Data":"ca540b7d5b9796b148a47711ca823c44cdded0771435a1f7d0fe11fc82e0c7d3"} Feb 27 19:07:10 crc kubenswrapper[4981]: E0227 19:07:10.442128 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="d57cb309-6812-4de2-a172-8d0896a7d864" Feb 27 19:07:10 crc kubenswrapper[4981]: I0227 19:07:10.442379 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e48390e6-5fc4-4c7e-983d-8338bf663e75","Type":"ContainerStarted","Data":"dd5a437e89f1984f0479ec8286bb8061f357082ffc23b35bd6f382a0898da54c"} Feb 27 19:07:10 crc kubenswrapper[4981]: I0227 19:07:10.442420 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e48390e6-5fc4-4c7e-983d-8338bf663e75","Type":"ContainerStarted","Data":"4fed5c77f575f747ac150d5541be3f78f7462e24e36ffd8348183eb7cc147164"} Feb 27 19:07:10 crc kubenswrapper[4981]: I0227 19:07:10.445222 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"1c22e070-8348-440e-a801-64927da21e98","Type":"ContainerStarted","Data":"156db9f0e659f08402952be0d8b3b765d9002fac585f7beceaa66f2923a4c3d1"} Feb 27 19:07:10 crc kubenswrapper[4981]: I0227 19:07:10.467640 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.608833903 podStartE2EDuration="1m2.467617201s" podCreationTimestamp="2026-02-27 19:06:08 +0000 UTC" firstStartedPulling="2026-02-27 19:06:10.5636427 +0000 UTC m=+1270.042423860" lastFinishedPulling="2026-02-27 19:07:03.422425978 +0000 UTC m=+1322.901207158" observedRunningTime="2026-02-27 19:07:10.454528976 +0000 UTC m=+1329.933310176" watchObservedRunningTime="2026-02-27 19:07:10.467617201 +0000 UTC m=+1329.946398391" Feb 27 19:07:10 crc kubenswrapper[4981]: I0227 19:07:10.479454 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.970098318 podStartE2EDuration="58.479433449s" podCreationTimestamp="2026-02-27 19:06:12 +0000 UTC" firstStartedPulling="2026-02-27 19:06:13.014342573 +0000 UTC m=+1272.493123733" lastFinishedPulling="2026-02-27 19:07:09.523677704 +0000 UTC m=+1329.002458864" observedRunningTime="2026-02-27 19:07:10.475401916 +0000 UTC m=+1329.954183086" watchObservedRunningTime="2026-02-27 19:07:10.479433449 +0000 UTC m=+1329.958214619" Feb 27 19:07:10 crc kubenswrapper[4981]: I0227 19:07:10.503997 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-5xwl7" podStartSLOduration=51.79160557 podStartE2EDuration="53.50398023s" podCreationTimestamp="2026-02-27 19:06:17 +0000 UTC" firstStartedPulling="2026-02-27 19:07:03.44698305 +0000 UTC m=+1322.925764240" lastFinishedPulling="2026-02-27 19:07:05.15935774 +0000 UTC m=+1324.638138900" observedRunningTime="2026-02-27 19:07:10.499381441 +0000 UTC m=+1329.978162601" watchObservedRunningTime="2026-02-27 19:07:10.50398023 +0000 UTC m=+1329.982761400" 
Feb 27 19:07:10 crc kubenswrapper[4981]: I0227 19:07:10.557600 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=9.874709579 podStartE2EDuration="1m0.55757617s" podCreationTimestamp="2026-02-27 19:06:10 +0000 UTC" firstStartedPulling="2026-02-27 19:06:12.761999136 +0000 UTC m=+1272.240780306" lastFinishedPulling="2026-02-27 19:07:03.444865717 +0000 UTC m=+1322.923646897" observedRunningTime="2026-02-27 19:07:10.547202837 +0000 UTC m=+1330.025984007" watchObservedRunningTime="2026-02-27 19:07:10.55757617 +0000 UTC m=+1330.036357370" Feb 27 19:07:10 crc kubenswrapper[4981]: I0227 19:07:10.575468 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=44.251940786 podStartE2EDuration="50.57544188s" podCreationTimestamp="2026-02-27 19:06:20 +0000 UTC" firstStartedPulling="2026-02-27 19:07:03.123641128 +0000 UTC m=+1322.602422288" lastFinishedPulling="2026-02-27 19:07:09.447142222 +0000 UTC m=+1328.925923382" observedRunningTime="2026-02-27 19:07:10.564494399 +0000 UTC m=+1330.043275599" watchObservedRunningTime="2026-02-27 19:07:10.57544188 +0000 UTC m=+1330.054223070" Feb 27 19:07:11 crc kubenswrapper[4981]: I0227 19:07:11.362641 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 27 19:07:11 crc kubenswrapper[4981]: I0227 19:07:11.452148 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:07:11 crc kubenswrapper[4981]: E0227 19:07:11.453650 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="d57cb309-6812-4de2-a172-8d0896a7d864" Feb 27 19:07:12 crc 
kubenswrapper[4981]: I0227 19:07:12.224591 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 27 19:07:12 crc kubenswrapper[4981]: I0227 19:07:12.224659 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 27 19:07:12 crc kubenswrapper[4981]: I0227 19:07:12.363451 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 27 19:07:12 crc kubenswrapper[4981]: I0227 19:07:12.413024 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 27 19:07:13 crc kubenswrapper[4981]: I0227 19:07:13.207189 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" Feb 27 19:07:13 crc kubenswrapper[4981]: I0227 19:07:13.280594 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dnkp5"] Feb 27 19:07:13 crc kubenswrapper[4981]: I0227 19:07:13.571142 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-dnkp5" Feb 27 19:07:13 crc kubenswrapper[4981]: I0227 19:07:13.691407 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nssks\" (UniqueName: \"kubernetes.io/projected/dabb19c5-377f-433c-b523-68d8e1296f9b-kube-api-access-nssks\") pod \"dabb19c5-377f-433c-b523-68d8e1296f9b\" (UID: \"dabb19c5-377f-433c-b523-68d8e1296f9b\") " Feb 27 19:07:13 crc kubenswrapper[4981]: I0227 19:07:13.691548 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dabb19c5-377f-433c-b523-68d8e1296f9b-dns-svc\") pod \"dabb19c5-377f-433c-b523-68d8e1296f9b\" (UID: \"dabb19c5-377f-433c-b523-68d8e1296f9b\") " Feb 27 19:07:13 crc kubenswrapper[4981]: I0227 19:07:13.691719 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabb19c5-377f-433c-b523-68d8e1296f9b-config\") pod \"dabb19c5-377f-433c-b523-68d8e1296f9b\" (UID: \"dabb19c5-377f-433c-b523-68d8e1296f9b\") " Feb 27 19:07:13 crc kubenswrapper[4981]: I0227 19:07:13.692141 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dabb19c5-377f-433c-b523-68d8e1296f9b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dabb19c5-377f-433c-b523-68d8e1296f9b" (UID: "dabb19c5-377f-433c-b523-68d8e1296f9b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:07:13 crc kubenswrapper[4981]: I0227 19:07:13.692265 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dabb19c5-377f-433c-b523-68d8e1296f9b-config" (OuterVolumeSpecName: "config") pod "dabb19c5-377f-433c-b523-68d8e1296f9b" (UID: "dabb19c5-377f-433c-b523-68d8e1296f9b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:07:13 crc kubenswrapper[4981]: I0227 19:07:13.720465 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dabb19c5-377f-433c-b523-68d8e1296f9b-kube-api-access-nssks" (OuterVolumeSpecName: "kube-api-access-nssks") pod "dabb19c5-377f-433c-b523-68d8e1296f9b" (UID: "dabb19c5-377f-433c-b523-68d8e1296f9b"). InnerVolumeSpecName "kube-api-access-nssks". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:07:13 crc kubenswrapper[4981]: I0227 19:07:13.795609 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dabb19c5-377f-433c-b523-68d8e1296f9b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:13 crc kubenswrapper[4981]: I0227 19:07:13.796049 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabb19c5-377f-433c-b523-68d8e1296f9b-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:13 crc kubenswrapper[4981]: I0227 19:07:13.796096 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nssks\" (UniqueName: \"kubernetes.io/projected/dabb19c5-377f-433c-b523-68d8e1296f9b-kube-api-access-nssks\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:14 crc kubenswrapper[4981]: I0227 19:07:14.481449 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-dnkp5" event={"ID":"dabb19c5-377f-433c-b523-68d8e1296f9b","Type":"ContainerDied","Data":"e9cf6201f7011c59c0727729c169e55386d1191a84161a2ad9d706a1a98024b6"} Feb 27 19:07:14 crc kubenswrapper[4981]: I0227 19:07:14.481506 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-dnkp5" Feb 27 19:07:14 crc kubenswrapper[4981]: I0227 19:07:14.547708 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dnkp5"] Feb 27 19:07:14 crc kubenswrapper[4981]: I0227 19:07:14.553017 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-dnkp5"] Feb 27 19:07:14 crc kubenswrapper[4981]: I0227 19:07:14.750637 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 27 19:07:14 crc kubenswrapper[4981]: I0227 19:07:14.852162 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 27 19:07:15 crc kubenswrapper[4981]: I0227 19:07:15.646272 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dabb19c5-377f-433c-b523-68d8e1296f9b" path="/var/lib/kubelet/pods/dabb19c5-377f-433c-b523-68d8e1296f9b/volumes" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.402821 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.650308 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-8lsxr"] Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.651836 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.655694 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.669295 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-8lsxr"] Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.691207 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-6bhp6"] Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.692083 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-6bhp6" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.693973 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.710477 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6bhp6"] Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.752392 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fcba892b-5905-40ec-a2cf-14e71aeba8c1-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-8lsxr\" (UID: \"fcba892b-5905-40ec-a2cf-14e71aeba8c1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.752489 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fcba892b-5905-40ec-a2cf-14e71aeba8c1-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-8lsxr\" (UID: \"fcba892b-5905-40ec-a2cf-14e71aeba8c1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.752563 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k95zj\" (UniqueName: \"kubernetes.io/projected/fcba892b-5905-40ec-a2cf-14e71aeba8c1-kube-api-access-k95zj\") pod \"dnsmasq-dns-86db49b7ff-8lsxr\" (UID: \"fcba892b-5905-40ec-a2cf-14e71aeba8c1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.752591 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcba892b-5905-40ec-a2cf-14e71aeba8c1-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-8lsxr\" (UID: \"fcba892b-5905-40ec-a2cf-14e71aeba8c1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.752611 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcba892b-5905-40ec-a2cf-14e71aeba8c1-config\") pod \"dnsmasq-dns-86db49b7ff-8lsxr\" (UID: \"fcba892b-5905-40ec-a2cf-14e71aeba8c1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.853791 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqgtz\" (UniqueName: \"kubernetes.io/projected/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-kube-api-access-lqgtz\") pod \"ovn-controller-metrics-6bhp6\" (UID: \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\") " pod="openstack/ovn-controller-metrics-6bhp6" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.853842 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fcba892b-5905-40ec-a2cf-14e71aeba8c1-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-8lsxr\" (UID: \"fcba892b-5905-40ec-a2cf-14e71aeba8c1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.853899 4981 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k95zj\" (UniqueName: \"kubernetes.io/projected/fcba892b-5905-40ec-a2cf-14e71aeba8c1-kube-api-access-k95zj\") pod \"dnsmasq-dns-86db49b7ff-8lsxr\" (UID: \"fcba892b-5905-40ec-a2cf-14e71aeba8c1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.853919 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-ovs-rundir\") pod \"ovn-controller-metrics-6bhp6\" (UID: \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\") " pod="openstack/ovn-controller-metrics-6bhp6" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.853941 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcba892b-5905-40ec-a2cf-14e71aeba8c1-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-8lsxr\" (UID: \"fcba892b-5905-40ec-a2cf-14e71aeba8c1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.853961 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcba892b-5905-40ec-a2cf-14e71aeba8c1-config\") pod \"dnsmasq-dns-86db49b7ff-8lsxr\" (UID: \"fcba892b-5905-40ec-a2cf-14e71aeba8c1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.853984 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-ovn-rundir\") pod \"ovn-controller-metrics-6bhp6\" (UID: \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\") " pod="openstack/ovn-controller-metrics-6bhp6" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.854000 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-combined-ca-bundle\") pod \"ovn-controller-metrics-6bhp6\" (UID: \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\") " pod="openstack/ovn-controller-metrics-6bhp6" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.854033 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fcba892b-5905-40ec-a2cf-14e71aeba8c1-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-8lsxr\" (UID: \"fcba892b-5905-40ec-a2cf-14e71aeba8c1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.854054 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-config\") pod \"ovn-controller-metrics-6bhp6\" (UID: \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\") " pod="openstack/ovn-controller-metrics-6bhp6" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.854362 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6bhp6\" (UID: \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\") " pod="openstack/ovn-controller-metrics-6bhp6" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.854907 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcba892b-5905-40ec-a2cf-14e71aeba8c1-config\") pod \"dnsmasq-dns-86db49b7ff-8lsxr\" (UID: \"fcba892b-5905-40ec-a2cf-14e71aeba8c1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.855027 4981 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcba892b-5905-40ec-a2cf-14e71aeba8c1-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-8lsxr\" (UID: \"fcba892b-5905-40ec-a2cf-14e71aeba8c1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.855165 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fcba892b-5905-40ec-a2cf-14e71aeba8c1-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-8lsxr\" (UID: \"fcba892b-5905-40ec-a2cf-14e71aeba8c1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.855169 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fcba892b-5905-40ec-a2cf-14e71aeba8c1-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-8lsxr\" (UID: \"fcba892b-5905-40ec-a2cf-14e71aeba8c1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.873006 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k95zj\" (UniqueName: \"kubernetes.io/projected/fcba892b-5905-40ec-a2cf-14e71aeba8c1-kube-api-access-k95zj\") pod \"dnsmasq-dns-86db49b7ff-8lsxr\" (UID: \"fcba892b-5905-40ec-a2cf-14e71aeba8c1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.955928 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqgtz\" (UniqueName: \"kubernetes.io/projected/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-kube-api-access-lqgtz\") pod \"ovn-controller-metrics-6bhp6\" (UID: \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\") " pod="openstack/ovn-controller-metrics-6bhp6" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.956027 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" 
(UniqueName: \"kubernetes.io/host-path/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-ovs-rundir\") pod \"ovn-controller-metrics-6bhp6\" (UID: \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\") " pod="openstack/ovn-controller-metrics-6bhp6" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.956073 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-ovn-rundir\") pod \"ovn-controller-metrics-6bhp6\" (UID: \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\") " pod="openstack/ovn-controller-metrics-6bhp6" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.956091 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-combined-ca-bundle\") pod \"ovn-controller-metrics-6bhp6\" (UID: \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\") " pod="openstack/ovn-controller-metrics-6bhp6" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.956136 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-config\") pod \"ovn-controller-metrics-6bhp6\" (UID: \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\") " pod="openstack/ovn-controller-metrics-6bhp6" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.956180 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6bhp6\" (UID: \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\") " pod="openstack/ovn-controller-metrics-6bhp6" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.956363 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-ovn-rundir\") pod \"ovn-controller-metrics-6bhp6\" (UID: \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\") " pod="openstack/ovn-controller-metrics-6bhp6" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.956383 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-ovs-rundir\") pod \"ovn-controller-metrics-6bhp6\" (UID: \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\") " pod="openstack/ovn-controller-metrics-6bhp6" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.956940 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-config\") pod \"ovn-controller-metrics-6bhp6\" (UID: \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\") " pod="openstack/ovn-controller-metrics-6bhp6" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.959681 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6bhp6\" (UID: \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\") " pod="openstack/ovn-controller-metrics-6bhp6" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.961393 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-combined-ca-bundle\") pod \"ovn-controller-metrics-6bhp6\" (UID: \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\") " pod="openstack/ovn-controller-metrics-6bhp6" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.970001 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" Feb 27 19:07:16 crc kubenswrapper[4981]: I0227 19:07:16.970905 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqgtz\" (UniqueName: \"kubernetes.io/projected/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-kube-api-access-lqgtz\") pod \"ovn-controller-metrics-6bhp6\" (UID: \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\") " pod="openstack/ovn-controller-metrics-6bhp6" Feb 27 19:07:17 crc kubenswrapper[4981]: I0227 19:07:17.011889 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-6bhp6" Feb 27 19:07:17 crc kubenswrapper[4981]: I0227 19:07:17.558197 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 27 19:07:17 crc kubenswrapper[4981]: I0227 19:07:17.571312 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-8lsxr"] Feb 27 19:07:17 crc kubenswrapper[4981]: I0227 19:07:17.610194 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6bhp6"] Feb 27 19:07:17 crc kubenswrapper[4981]: W0227 19:07:17.631383 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaec1d5c5_b41c_4d8b_9810_04a25a18c1b1.slice/crio-96d32469e5b65e47e87c0bdb3399131cc17cdd50b6cf2ac97e29d611fd8c9f34 WatchSource:0}: Error finding container 96d32469e5b65e47e87c0bdb3399131cc17cdd50b6cf2ac97e29d611fd8c9f34: Status 404 returned error can't find the container with id 96d32469e5b65e47e87c0bdb3399131cc17cdd50b6cf2ac97e29d611fd8c9f34 Feb 27 19:07:18 crc kubenswrapper[4981]: I0227 19:07:18.525181 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" event={"ID":"fcba892b-5905-40ec-a2cf-14e71aeba8c1","Type":"ContainerStarted","Data":"5ccd439c0236c0fd87d5d5170d8ddc8da0b3f368ea8eed38ef9f510c5b01137e"} 
Feb 27 19:07:18 crc kubenswrapper[4981]: I0227 19:07:18.525239 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" event={"ID":"fcba892b-5905-40ec-a2cf-14e71aeba8c1","Type":"ContainerStarted","Data":"250a44d25b0e19e5f5d277cc6af95a0d535bdbd51bc6112d66ac3b86b4700c91"} Feb 27 19:07:18 crc kubenswrapper[4981]: I0227 19:07:18.527767 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6bhp6" event={"ID":"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1","Type":"ContainerStarted","Data":"96d32469e5b65e47e87c0bdb3399131cc17cdd50b6cf2ac97e29d611fd8c9f34"} Feb 27 19:07:21 crc kubenswrapper[4981]: I0227 19:07:21.268907 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 27 19:07:21 crc kubenswrapper[4981]: I0227 19:07:21.271565 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:07:21 crc kubenswrapper[4981]: I0227 19:07:21.271620 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:07:21 crc kubenswrapper[4981]: I0227 19:07:21.338505 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6bhp6" event={"ID":"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1","Type":"ContainerStarted","Data":"52127a29349e4ff336f88e8c487c475618ad44fe9d5adba506f6a7274fdd5580"} Feb 27 19:07:21 crc kubenswrapper[4981]: I0227 19:07:21.338570 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/openstack-galera-0" Feb 27 19:07:21 crc kubenswrapper[4981]: I0227 19:07:21.702431 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-kqzmw"] Feb 27 19:07:21 crc kubenswrapper[4981]: I0227 19:07:21.703399 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kqzmw" Feb 27 19:07:21 crc kubenswrapper[4981]: I0227 19:07:21.708176 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 27 19:07:21 crc kubenswrapper[4981]: I0227 19:07:21.850938 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kp9b\" (UniqueName: \"kubernetes.io/projected/9e5852d1-68b9-4547-8cc3-b3170875303f-kube-api-access-7kp9b\") pod \"root-account-create-update-kqzmw\" (UID: \"9e5852d1-68b9-4547-8cc3-b3170875303f\") " pod="openstack/root-account-create-update-kqzmw" Feb 27 19:07:21 crc kubenswrapper[4981]: I0227 19:07:21.851443 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e5852d1-68b9-4547-8cc3-b3170875303f-operator-scripts\") pod \"root-account-create-update-kqzmw\" (UID: \"9e5852d1-68b9-4547-8cc3-b3170875303f\") " pod="openstack/root-account-create-update-kqzmw" Feb 27 19:07:22 crc kubenswrapper[4981]: I0227 19:07:22.063764 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kp9b\" (UniqueName: \"kubernetes.io/projected/9e5852d1-68b9-4547-8cc3-b3170875303f-kube-api-access-7kp9b\") pod \"root-account-create-update-kqzmw\" (UID: \"9e5852d1-68b9-4547-8cc3-b3170875303f\") " pod="openstack/root-account-create-update-kqzmw" Feb 27 19:07:22 crc kubenswrapper[4981]: I0227 19:07:22.063881 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/9e5852d1-68b9-4547-8cc3-b3170875303f-operator-scripts\") pod \"root-account-create-update-kqzmw\" (UID: \"9e5852d1-68b9-4547-8cc3-b3170875303f\") " pod="openstack/root-account-create-update-kqzmw" Feb 27 19:07:22 crc kubenswrapper[4981]: I0227 19:07:22.065073 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e5852d1-68b9-4547-8cc3-b3170875303f-operator-scripts\") pod \"root-account-create-update-kqzmw\" (UID: \"9e5852d1-68b9-4547-8cc3-b3170875303f\") " pod="openstack/root-account-create-update-kqzmw" Feb 27 19:07:22 crc kubenswrapper[4981]: I0227 19:07:22.081394 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kqzmw"] Feb 27 19:07:22 crc kubenswrapper[4981]: I0227 19:07:22.097269 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kp9b\" (UniqueName: \"kubernetes.io/projected/9e5852d1-68b9-4547-8cc3-b3170875303f-kube-api-access-7kp9b\") pod \"root-account-create-update-kqzmw\" (UID: \"9e5852d1-68b9-4547-8cc3-b3170875303f\") " pod="openstack/root-account-create-update-kqzmw" Feb 27 19:07:22 crc kubenswrapper[4981]: I0227 19:07:22.157603 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kqzmw" Feb 27 19:07:22 crc kubenswrapper[4981]: I0227 19:07:22.357445 4981 generic.go:334] "Generic (PLEG): container finished" podID="fcba892b-5905-40ec-a2cf-14e71aeba8c1" containerID="5ccd439c0236c0fd87d5d5170d8ddc8da0b3f368ea8eed38ef9f510c5b01137e" exitCode=0 Feb 27 19:07:22 crc kubenswrapper[4981]: I0227 19:07:22.358114 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" event={"ID":"fcba892b-5905-40ec-a2cf-14e71aeba8c1","Type":"ContainerDied","Data":"5ccd439c0236c0fd87d5d5170d8ddc8da0b3f368ea8eed38ef9f510c5b01137e"} Feb 27 19:07:23 crc kubenswrapper[4981]: I0227 19:07:23.197595 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kqzmw"] Feb 27 19:07:24 crc kubenswrapper[4981]: I0227 19:07:23.364411 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" event={"ID":"fcba892b-5905-40ec-a2cf-14e71aeba8c1","Type":"ContainerStarted","Data":"d5bea82b461062ec73a068e1c79214fa87c4ce1664bda7fbba0200fe8e05c16a"} Feb 27 19:07:24 crc kubenswrapper[4981]: I0227 19:07:23.366106 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kqzmw" event={"ID":"9e5852d1-68b9-4547-8cc3-b3170875303f","Type":"ContainerStarted","Data":"9822b357e6dc4667b7b48452f352443b048fa8ba5a7eefdc17f22c60fd37c85b"} Feb 27 19:07:24 crc kubenswrapper[4981]: I0227 19:07:24.191935 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-6bhp6" podStartSLOduration=8.191912092 podStartE2EDuration="8.191912092s" podCreationTimestamp="2026-02-27 19:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:07:23.714133193 +0000 UTC m=+1343.192914413" watchObservedRunningTime="2026-02-27 19:07:24.191912092 +0000 UTC 
m=+1343.670693252" Feb 27 19:07:24 crc kubenswrapper[4981]: I0227 19:07:24.199255 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-8lsxr"] Feb 27 19:07:24 crc kubenswrapper[4981]: I0227 19:07:24.229292 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-cff9l"] Feb 27 19:07:24 crc kubenswrapper[4981]: I0227 19:07:24.238022 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-cff9l" Feb 27 19:07:24 crc kubenswrapper[4981]: I0227 19:07:24.253146 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-cff9l"] Feb 27 19:07:24 crc kubenswrapper[4981]: I0227 19:07:24.308296 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-cff9l\" (UID: \"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c\") " pod="openstack/dnsmasq-dns-698758b865-cff9l" Feb 27 19:07:24 crc kubenswrapper[4981]: I0227 19:07:24.308613 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-config\") pod \"dnsmasq-dns-698758b865-cff9l\" (UID: \"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c\") " pod="openstack/dnsmasq-dns-698758b865-cff9l" Feb 27 19:07:24 crc kubenswrapper[4981]: I0227 19:07:24.308703 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-dns-svc\") pod \"dnsmasq-dns-698758b865-cff9l\" (UID: \"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c\") " pod="openstack/dnsmasq-dns-698758b865-cff9l" Feb 27 19:07:24 crc kubenswrapper[4981]: I0227 19:07:24.308730 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgjsb\" (UniqueName: \"kubernetes.io/projected/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-kube-api-access-mgjsb\") pod \"dnsmasq-dns-698758b865-cff9l\" (UID: \"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c\") " pod="openstack/dnsmasq-dns-698758b865-cff9l" Feb 27 19:07:24 crc kubenswrapper[4981]: I0227 19:07:24.308785 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-cff9l\" (UID: \"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c\") " pod="openstack/dnsmasq-dns-698758b865-cff9l" Feb 27 19:07:24 crc kubenswrapper[4981]: I0227 19:07:24.372934 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kqzmw" event={"ID":"9e5852d1-68b9-4547-8cc3-b3170875303f","Type":"ContainerStarted","Data":"889670a03574258a741b7a2e3e7d293f4321e0eee5a13978816a46df965fa41a"} Feb 27 19:07:24 crc kubenswrapper[4981]: I0227 19:07:24.373038 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" Feb 27 19:07:24 crc kubenswrapper[4981]: I0227 19:07:24.410112 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-config\") pod \"dnsmasq-dns-698758b865-cff9l\" (UID: \"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c\") " pod="openstack/dnsmasq-dns-698758b865-cff9l" Feb 27 19:07:24 crc kubenswrapper[4981]: I0227 19:07:24.410160 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-dns-svc\") pod \"dnsmasq-dns-698758b865-cff9l\" (UID: \"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c\") " pod="openstack/dnsmasq-dns-698758b865-cff9l" Feb 27 19:07:24 crc 
kubenswrapper[4981]: I0227 19:07:24.410179 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgjsb\" (UniqueName: \"kubernetes.io/projected/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-kube-api-access-mgjsb\") pod \"dnsmasq-dns-698758b865-cff9l\" (UID: \"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c\") " pod="openstack/dnsmasq-dns-698758b865-cff9l" Feb 27 19:07:24 crc kubenswrapper[4981]: I0227 19:07:24.410206 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-cff9l\" (UID: \"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c\") " pod="openstack/dnsmasq-dns-698758b865-cff9l" Feb 27 19:07:24 crc kubenswrapper[4981]: I0227 19:07:24.410248 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-cff9l\" (UID: \"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c\") " pod="openstack/dnsmasq-dns-698758b865-cff9l" Feb 27 19:07:24 crc kubenswrapper[4981]: I0227 19:07:24.411110 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-cff9l\" (UID: \"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c\") " pod="openstack/dnsmasq-dns-698758b865-cff9l" Feb 27 19:07:24 crc kubenswrapper[4981]: I0227 19:07:24.411770 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-cff9l\" (UID: \"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c\") " pod="openstack/dnsmasq-dns-698758b865-cff9l" Feb 27 19:07:24 crc kubenswrapper[4981]: I0227 19:07:24.412043 4981 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-config\") pod \"dnsmasq-dns-698758b865-cff9l\" (UID: \"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c\") " pod="openstack/dnsmasq-dns-698758b865-cff9l" Feb 27 19:07:24 crc kubenswrapper[4981]: I0227 19:07:24.412407 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-dns-svc\") pod \"dnsmasq-dns-698758b865-cff9l\" (UID: \"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c\") " pod="openstack/dnsmasq-dns-698758b865-cff9l" Feb 27 19:07:24 crc kubenswrapper[4981]: I0227 19:07:24.427862 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgjsb\" (UniqueName: \"kubernetes.io/projected/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-kube-api-access-mgjsb\") pod \"dnsmasq-dns-698758b865-cff9l\" (UID: \"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c\") " pod="openstack/dnsmasq-dns-698758b865-cff9l" Feb 27 19:07:24 crc kubenswrapper[4981]: I0227 19:07:24.561097 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-cff9l" Feb 27 19:07:25 crc kubenswrapper[4981]: I0227 19:07:25.714764 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" podUID="fcba892b-5905-40ec-a2cf-14e71aeba8c1" containerName="dnsmasq-dns" containerID="cri-o://d5bea82b461062ec73a068e1c79214fa87c4ce1664bda7fbba0200fe8e05c16a" gracePeriod=10 Feb 27 19:07:25 crc kubenswrapper[4981]: I0227 19:07:25.728794 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" podStartSLOduration=9.728768527 podStartE2EDuration="9.728768527s" podCreationTimestamp="2026-02-27 19:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:07:24.405863087 +0000 UTC m=+1343.884644267" watchObservedRunningTime="2026-02-27 19:07:25.728768527 +0000 UTC m=+1345.207549687" Feb 27 19:07:25 crc kubenswrapper[4981]: I0227 19:07:25.732875 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 27 19:07:25 crc kubenswrapper[4981]: I0227 19:07:25.738009 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-kqzmw" podStartSLOduration=4.737988316 podStartE2EDuration="4.737988316s" podCreationTimestamp="2026-02-27 19:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:07:25.731288503 +0000 UTC m=+1345.210069663" watchObservedRunningTime="2026-02-27 19:07:25.737988316 +0000 UTC m=+1345.216769476" Feb 27 19:07:25 crc kubenswrapper[4981]: I0227 19:07:25.741747 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 27 19:07:25 crc kubenswrapper[4981]: I0227 19:07:25.743700 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-t4kxx" Feb 27 19:07:25 crc kubenswrapper[4981]: I0227 19:07:25.746679 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 27 19:07:25 crc kubenswrapper[4981]: I0227 19:07:25.747765 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 27 19:07:25 crc kubenswrapper[4981]: I0227 19:07:25.747881 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 27 19:07:25 crc kubenswrapper[4981]: I0227 19:07:25.749708 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 27 19:07:26 crc kubenswrapper[4981]: I0227 19:07:26.267639 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c9c5bb1a-80fb-459f-acb9-e3751c60f684-lock\") pod \"swift-storage-0\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " pod="openstack/swift-storage-0" Feb 27 19:07:26 crc kubenswrapper[4981]: I0227 19:07:26.267933 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjg8v\" (UniqueName: \"kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-kube-api-access-hjg8v\") pod \"swift-storage-0\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " pod="openstack/swift-storage-0" Feb 27 19:07:26 crc kubenswrapper[4981]: I0227 19:07:26.268134 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-etc-swift\") pod \"swift-storage-0\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " 
pod="openstack/swift-storage-0" Feb 27 19:07:26 crc kubenswrapper[4981]: I0227 19:07:26.268214 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c9c5bb1a-80fb-459f-acb9-e3751c60f684-cache\") pod \"swift-storage-0\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " pod="openstack/swift-storage-0" Feb 27 19:07:26 crc kubenswrapper[4981]: I0227 19:07:26.268341 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c5bb1a-80fb-459f-acb9-e3751c60f684-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " pod="openstack/swift-storage-0" Feb 27 19:07:26 crc kubenswrapper[4981]: I0227 19:07:26.268477 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " pod="openstack/swift-storage-0" Feb 27 19:07:26 crc kubenswrapper[4981]: I0227 19:07:26.369793 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " pod="openstack/swift-storage-0" Feb 27 19:07:26 crc kubenswrapper[4981]: I0227 19:07:26.369883 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c9c5bb1a-80fb-459f-acb9-e3751c60f684-lock\") pod \"swift-storage-0\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " pod="openstack/swift-storage-0" Feb 27 19:07:26 crc kubenswrapper[4981]: I0227 19:07:26.369914 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjg8v\" 
(UniqueName: \"kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-kube-api-access-hjg8v\") pod \"swift-storage-0\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " pod="openstack/swift-storage-0" Feb 27 19:07:26 crc kubenswrapper[4981]: I0227 19:07:26.369969 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-etc-swift\") pod \"swift-storage-0\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " pod="openstack/swift-storage-0" Feb 27 19:07:26 crc kubenswrapper[4981]: I0227 19:07:26.370000 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c9c5bb1a-80fb-459f-acb9-e3751c60f684-cache\") pod \"swift-storage-0\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " pod="openstack/swift-storage-0" Feb 27 19:07:26 crc kubenswrapper[4981]: I0227 19:07:26.370069 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c5bb1a-80fb-459f-acb9-e3751c60f684-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " pod="openstack/swift-storage-0" Feb 27 19:07:26 crc kubenswrapper[4981]: E0227 19:07:26.370183 4981 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 19:07:26 crc kubenswrapper[4981]: E0227 19:07:26.370210 4981 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 19:07:26 crc kubenswrapper[4981]: E0227 19:07:26.370266 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-etc-swift podName:c9c5bb1a-80fb-459f-acb9-e3751c60f684 nodeName:}" failed. 
No retries permitted until 2026-02-27 19:07:26.870246223 +0000 UTC m=+1346.349027383 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-etc-swift") pod "swift-storage-0" (UID: "c9c5bb1a-80fb-459f-acb9-e3751c60f684") : configmap "swift-ring-files" not found Feb 27 19:07:26 crc kubenswrapper[4981]: I0227 19:07:26.370650 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/swift-storage-0" Feb 27 19:07:26 crc kubenswrapper[4981]: I0227 19:07:26.370666 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c9c5bb1a-80fb-459f-acb9-e3751c60f684-lock\") pod \"swift-storage-0\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " pod="openstack/swift-storage-0" Feb 27 19:07:26 crc kubenswrapper[4981]: I0227 19:07:26.370810 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c9c5bb1a-80fb-459f-acb9-e3751c60f684-cache\") pod \"swift-storage-0\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " pod="openstack/swift-storage-0" Feb 27 19:07:26 crc kubenswrapper[4981]: I0227 19:07:26.386410 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c5bb1a-80fb-459f-acb9-e3751c60f684-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " pod="openstack/swift-storage-0" Feb 27 19:07:26 crc kubenswrapper[4981]: I0227 19:07:26.391344 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjg8v\" (UniqueName: 
\"kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-kube-api-access-hjg8v\") pod \"swift-storage-0\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " pod="openstack/swift-storage-0" Feb 27 19:07:26 crc kubenswrapper[4981]: I0227 19:07:26.397081 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " pod="openstack/swift-storage-0" Feb 27 19:07:26 crc kubenswrapper[4981]: I0227 19:07:26.719103 4981 generic.go:334] "Generic (PLEG): container finished" podID="fcba892b-5905-40ec-a2cf-14e71aeba8c1" containerID="d5bea82b461062ec73a068e1c79214fa87c4ce1664bda7fbba0200fe8e05c16a" exitCode=0 Feb 27 19:07:26 crc kubenswrapper[4981]: I0227 19:07:26.719143 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" event={"ID":"fcba892b-5905-40ec-a2cf-14e71aeba8c1","Type":"ContainerDied","Data":"d5bea82b461062ec73a068e1c79214fa87c4ce1664bda7fbba0200fe8e05c16a"} Feb 27 19:07:26 crc kubenswrapper[4981]: I0227 19:07:26.878511 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-etc-swift\") pod \"swift-storage-0\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " pod="openstack/swift-storage-0" Feb 27 19:07:26 crc kubenswrapper[4981]: E0227 19:07:26.878763 4981 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 19:07:26 crc kubenswrapper[4981]: E0227 19:07:26.878936 4981 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 19:07:26 crc kubenswrapper[4981]: E0227 19:07:26.879039 4981 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-etc-swift podName:c9c5bb1a-80fb-459f-acb9-e3751c60f684 nodeName:}" failed. No retries permitted until 2026-02-27 19:07:27.879023439 +0000 UTC m=+1347.357804599 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-etc-swift") pod "swift-storage-0" (UID: "c9c5bb1a-80fb-459f-acb9-e3751c60f684") : configmap "swift-ring-files" not found Feb 27 19:07:28 crc kubenswrapper[4981]: I0227 19:07:27.912129 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-etc-swift\") pod \"swift-storage-0\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " pod="openstack/swift-storage-0" Feb 27 19:07:28 crc kubenswrapper[4981]: E0227 19:07:27.912390 4981 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 19:07:28 crc kubenswrapper[4981]: E0227 19:07:27.912402 4981 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 19:07:28 crc kubenswrapper[4981]: E0227 19:07:27.912444 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-etc-swift podName:c9c5bb1a-80fb-459f-acb9-e3751c60f684 nodeName:}" failed. No retries permitted until 2026-02-27 19:07:29.912429672 +0000 UTC m=+1349.391210832 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-etc-swift") pod "swift-storage-0" (UID: "c9c5bb1a-80fb-459f-acb9-e3751c60f684") : configmap "swift-ring-files" not found Feb 27 19:07:28 crc kubenswrapper[4981]: I0227 19:07:28.195272 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-7698fb7476-ljffl" podUID="ded84d09-908f-47fd-b75b-25013113939f" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.67:8081/readyz\": dial tcp 10.217.0.67:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.252147 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.321781 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-cff9l"] Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.342598 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-j6c5h"] Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.343584 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-j6c5h"] Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.343668 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-j6c5h" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.346418 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.346641 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.351239 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.354312 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="918ffa1d-14dc-4215-ad79-e545616bcfc5" containerName="galera" probeResult="failure" output=< Feb 27 19:07:29 crc kubenswrapper[4981]: wsrep_local_state_comment (Joined) differs from Synced Feb 27 19:07:29 crc kubenswrapper[4981]: > Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.484474 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-etc-swift\") pod \"swift-ring-rebalance-j6c5h\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " pod="openstack/swift-ring-rebalance-j6c5h" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.484528 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-swiftconf\") pod \"swift-ring-rebalance-j6c5h\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " pod="openstack/swift-ring-rebalance-j6c5h" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.484561 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-dispersionconf\") pod \"swift-ring-rebalance-j6c5h\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " pod="openstack/swift-ring-rebalance-j6c5h" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.484684 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-combined-ca-bundle\") pod \"swift-ring-rebalance-j6c5h\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " pod="openstack/swift-ring-rebalance-j6c5h" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.484708 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-scripts\") pod \"swift-ring-rebalance-j6c5h\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " pod="openstack/swift-ring-rebalance-j6c5h" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.484725 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-ring-data-devices\") pod \"swift-ring-rebalance-j6c5h\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " pod="openstack/swift-ring-rebalance-j6c5h" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.484924 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndqc2\" (UniqueName: \"kubernetes.io/projected/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-kube-api-access-ndqc2\") pod \"swift-ring-rebalance-j6c5h\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " pod="openstack/swift-ring-rebalance-j6c5h" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.586183 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-combined-ca-bundle\") pod \"swift-ring-rebalance-j6c5h\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " pod="openstack/swift-ring-rebalance-j6c5h" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.586225 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-scripts\") pod \"swift-ring-rebalance-j6c5h\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " pod="openstack/swift-ring-rebalance-j6c5h" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.586248 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-ring-data-devices\") pod \"swift-ring-rebalance-j6c5h\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " pod="openstack/swift-ring-rebalance-j6c5h" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.586314 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndqc2\" (UniqueName: \"kubernetes.io/projected/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-kube-api-access-ndqc2\") pod \"swift-ring-rebalance-j6c5h\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " pod="openstack/swift-ring-rebalance-j6c5h" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.586343 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-etc-swift\") pod \"swift-ring-rebalance-j6c5h\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " pod="openstack/swift-ring-rebalance-j6c5h" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.586361 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-swiftconf\") pod \"swift-ring-rebalance-j6c5h\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " pod="openstack/swift-ring-rebalance-j6c5h" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.586387 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-dispersionconf\") pod \"swift-ring-rebalance-j6c5h\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " pod="openstack/swift-ring-rebalance-j6c5h" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.587206 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-ring-data-devices\") pod \"swift-ring-rebalance-j6c5h\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " pod="openstack/swift-ring-rebalance-j6c5h" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.587569 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-scripts\") pod \"swift-ring-rebalance-j6c5h\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " pod="openstack/swift-ring-rebalance-j6c5h" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.587588 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-etc-swift\") pod \"swift-ring-rebalance-j6c5h\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " pod="openstack/swift-ring-rebalance-j6c5h" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.591194 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-dispersionconf\") pod \"swift-ring-rebalance-j6c5h\" (UID: 
\"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " pod="openstack/swift-ring-rebalance-j6c5h" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.591452 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-combined-ca-bundle\") pod \"swift-ring-rebalance-j6c5h\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " pod="openstack/swift-ring-rebalance-j6c5h" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.591670 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-swiftconf\") pod \"swift-ring-rebalance-j6c5h\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " pod="openstack/swift-ring-rebalance-j6c5h" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.800238 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-k2kn8" podUID="e1c487e5-53af-41ef-8713-87d17ab9632d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 19:07:29 crc kubenswrapper[4981]: I0227 19:07:29.832432 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndqc2\" (UniqueName: \"kubernetes.io/projected/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-kube-api-access-ndqc2\") pod \"swift-ring-rebalance-j6c5h\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " pod="openstack/swift-ring-rebalance-j6c5h" Feb 27 19:07:30 crc kubenswrapper[4981]: I0227 19:07:30.008581 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-j6c5h" Feb 27 19:07:30 crc kubenswrapper[4981]: E0227 19:07:30.010946 4981 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 19:07:30 crc kubenswrapper[4981]: E0227 19:07:30.010974 4981 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 19:07:30 crc kubenswrapper[4981]: E0227 19:07:30.011016 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-etc-swift podName:c9c5bb1a-80fb-459f-acb9-e3751c60f684 nodeName:}" failed. No retries permitted until 2026-02-27 19:07:34.01100242 +0000 UTC m=+1353.489783580 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-etc-swift") pod "swift-storage-0" (UID: "c9c5bb1a-80fb-459f-acb9-e3751c60f684") : configmap "swift-ring-files" not found Feb 27 19:07:30 crc kubenswrapper[4981]: I0227 19:07:30.011380 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-etc-swift\") pod \"swift-storage-0\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " pod="openstack/swift-storage-0" Feb 27 19:07:30 crc kubenswrapper[4981]: I0227 19:07:30.214364 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 27 19:07:30 crc kubenswrapper[4981]: I0227 19:07:30.549768 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-cff9l" event={"ID":"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c","Type":"ContainerStarted","Data":"5ff2c79507c1b289553a0053eee6cf04dd2d0d81cc1e188e59da66521096c81b"} Feb 27 19:07:30 crc kubenswrapper[4981]: I0227 19:07:30.574301 4981 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" event={"ID":"fcba892b-5905-40ec-a2cf-14e71aeba8c1","Type":"ContainerDied","Data":"250a44d25b0e19e5f5d277cc6af95a0d535bdbd51bc6112d66ac3b86b4700c91"} Feb 27 19:07:30 crc kubenswrapper[4981]: I0227 19:07:30.574352 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="250a44d25b0e19e5f5d277cc6af95a0d535bdbd51bc6112d66ac3b86b4700c91" Feb 27 19:07:30 crc kubenswrapper[4981]: I0227 19:07:30.609318 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" Feb 27 19:07:30 crc kubenswrapper[4981]: I0227 19:07:30.610014 4981 generic.go:334] "Generic (PLEG): container finished" podID="9e5852d1-68b9-4547-8cc3-b3170875303f" containerID="889670a03574258a741b7a2e3e7d293f4321e0eee5a13978816a46df965fa41a" exitCode=0 Feb 27 19:07:30 crc kubenswrapper[4981]: I0227 19:07:30.610078 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kqzmw" event={"ID":"9e5852d1-68b9-4547-8cc3-b3170875303f","Type":"ContainerDied","Data":"889670a03574258a741b7a2e3e7d293f4321e0eee5a13978816a46df965fa41a"} Feb 27 19:07:30 crc kubenswrapper[4981]: I0227 19:07:30.631008 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fcba892b-5905-40ec-a2cf-14e71aeba8c1-ovsdbserver-nb\") pod \"fcba892b-5905-40ec-a2cf-14e71aeba8c1\" (UID: \"fcba892b-5905-40ec-a2cf-14e71aeba8c1\") " Feb 27 19:07:30 crc kubenswrapper[4981]: I0227 19:07:30.631133 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k95zj\" (UniqueName: \"kubernetes.io/projected/fcba892b-5905-40ec-a2cf-14e71aeba8c1-kube-api-access-k95zj\") pod \"fcba892b-5905-40ec-a2cf-14e71aeba8c1\" (UID: \"fcba892b-5905-40ec-a2cf-14e71aeba8c1\") " Feb 27 19:07:30 crc kubenswrapper[4981]: I0227 
19:07:30.631248 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fcba892b-5905-40ec-a2cf-14e71aeba8c1-ovsdbserver-sb\") pod \"fcba892b-5905-40ec-a2cf-14e71aeba8c1\" (UID: \"fcba892b-5905-40ec-a2cf-14e71aeba8c1\") " Feb 27 19:07:30 crc kubenswrapper[4981]: I0227 19:07:30.631308 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcba892b-5905-40ec-a2cf-14e71aeba8c1-dns-svc\") pod \"fcba892b-5905-40ec-a2cf-14e71aeba8c1\" (UID: \"fcba892b-5905-40ec-a2cf-14e71aeba8c1\") " Feb 27 19:07:30 crc kubenswrapper[4981]: I0227 19:07:30.631396 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcba892b-5905-40ec-a2cf-14e71aeba8c1-config\") pod \"fcba892b-5905-40ec-a2cf-14e71aeba8c1\" (UID: \"fcba892b-5905-40ec-a2cf-14e71aeba8c1\") " Feb 27 19:07:30 crc kubenswrapper[4981]: I0227 19:07:30.686939 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcba892b-5905-40ec-a2cf-14e71aeba8c1-kube-api-access-k95zj" (OuterVolumeSpecName: "kube-api-access-k95zj") pod "fcba892b-5905-40ec-a2cf-14e71aeba8c1" (UID: "fcba892b-5905-40ec-a2cf-14e71aeba8c1"). InnerVolumeSpecName "kube-api-access-k95zj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:07:30 crc kubenswrapper[4981]: I0227 19:07:30.739302 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k95zj\" (UniqueName: \"kubernetes.io/projected/fcba892b-5905-40ec-a2cf-14e71aeba8c1-kube-api-access-k95zj\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:30 crc kubenswrapper[4981]: I0227 19:07:30.785182 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcba892b-5905-40ec-a2cf-14e71aeba8c1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fcba892b-5905-40ec-a2cf-14e71aeba8c1" (UID: "fcba892b-5905-40ec-a2cf-14e71aeba8c1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:07:30 crc kubenswrapper[4981]: I0227 19:07:30.799082 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcba892b-5905-40ec-a2cf-14e71aeba8c1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fcba892b-5905-40ec-a2cf-14e71aeba8c1" (UID: "fcba892b-5905-40ec-a2cf-14e71aeba8c1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:07:30 crc kubenswrapper[4981]: I0227 19:07:30.818973 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcba892b-5905-40ec-a2cf-14e71aeba8c1-config" (OuterVolumeSpecName: "config") pod "fcba892b-5905-40ec-a2cf-14e71aeba8c1" (UID: "fcba892b-5905-40ec-a2cf-14e71aeba8c1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:07:30 crc kubenswrapper[4981]: I0227 19:07:30.830763 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcba892b-5905-40ec-a2cf-14e71aeba8c1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fcba892b-5905-40ec-a2cf-14e71aeba8c1" (UID: "fcba892b-5905-40ec-a2cf-14e71aeba8c1"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:07:30 crc kubenswrapper[4981]: I0227 19:07:30.841588 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fcba892b-5905-40ec-a2cf-14e71aeba8c1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:30 crc kubenswrapper[4981]: I0227 19:07:30.841624 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcba892b-5905-40ec-a2cf-14e71aeba8c1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:30 crc kubenswrapper[4981]: I0227 19:07:30.841640 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcba892b-5905-40ec-a2cf-14e71aeba8c1-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:30 crc kubenswrapper[4981]: I0227 19:07:30.841652 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fcba892b-5905-40ec-a2cf-14e71aeba8c1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:30 crc kubenswrapper[4981]: I0227 19:07:30.942672 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-j6c5h"] Feb 27 19:07:30 crc kubenswrapper[4981]: W0227 19:07:30.944367 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcfd5f62_e6b9_4a63_8030_df81c9d7b580.slice/crio-c0d99c1986a219e16631880a870ecb4fbf207ee88710aae7aad2c8d83877dade WatchSource:0}: Error finding container c0d99c1986a219e16631880a870ecb4fbf207ee88710aae7aad2c8d83877dade: Status 404 returned error can't find the container with id c0d99c1986a219e16631880a870ecb4fbf207ee88710aae7aad2c8d83877dade Feb 27 19:07:31 crc kubenswrapper[4981]: I0227 19:07:31.619738 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"d57cb309-6812-4de2-a172-8d0896a7d864","Type":"ContainerStarted","Data":"3a07e6641f50563b3e75f3c14deefbb8d5806ace0ae40d2c05789e3e1dfa6b3c"} Feb 27 19:07:31 crc kubenswrapper[4981]: I0227 19:07:31.622461 4981 generic.go:334] "Generic (PLEG): container finished" podID="e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c" containerID="183ebc75576d51ff1bbfa818c38efc374a0b300c82a944294b04c278f2b02249" exitCode=0 Feb 27 19:07:31 crc kubenswrapper[4981]: I0227 19:07:31.622530 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-cff9l" event={"ID":"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c","Type":"ContainerDied","Data":"183ebc75576d51ff1bbfa818c38efc374a0b300c82a944294b04c278f2b02249"} Feb 27 19:07:31 crc kubenswrapper[4981]: I0227 19:07:31.623943 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j6c5h" event={"ID":"fcfd5f62-e6b9-4a63-8030-df81c9d7b580","Type":"ContainerStarted","Data":"c0d99c1986a219e16631880a870ecb4fbf207ee88710aae7aad2c8d83877dade"} Feb 27 19:07:31 crc kubenswrapper[4981]: I0227 19:07:31.624080 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-8lsxr" Feb 27 19:07:31 crc kubenswrapper[4981]: I0227 19:07:31.653780 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=10.185122151 podStartE2EDuration="1m16.653761596s" podCreationTimestamp="2026-02-27 19:06:15 +0000 UTC" firstStartedPulling="2026-02-27 19:06:23.479571398 +0000 UTC m=+1282.958352588" lastFinishedPulling="2026-02-27 19:07:29.948210873 +0000 UTC m=+1349.426992033" observedRunningTime="2026-02-27 19:07:31.644999102 +0000 UTC m=+1351.123780302" watchObservedRunningTime="2026-02-27 19:07:31.653761596 +0000 UTC m=+1351.132542756" Feb 27 19:07:31 crc kubenswrapper[4981]: I0227 19:07:31.698710 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 27 19:07:31 crc kubenswrapper[4981]: I0227 19:07:31.698781 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 27 19:07:31 crc kubenswrapper[4981]: I0227 19:07:31.710843 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-8lsxr"] Feb 27 19:07:31 crc kubenswrapper[4981]: I0227 19:07:31.718414 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-8lsxr"] Feb 27 19:07:32 crc kubenswrapper[4981]: I0227 19:07:32.049270 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kqzmw" Feb 27 19:07:32 crc kubenswrapper[4981]: I0227 19:07:32.061463 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kp9b\" (UniqueName: \"kubernetes.io/projected/9e5852d1-68b9-4547-8cc3-b3170875303f-kube-api-access-7kp9b\") pod \"9e5852d1-68b9-4547-8cc3-b3170875303f\" (UID: \"9e5852d1-68b9-4547-8cc3-b3170875303f\") " Feb 27 19:07:32 crc kubenswrapper[4981]: I0227 19:07:32.061497 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e5852d1-68b9-4547-8cc3-b3170875303f-operator-scripts\") pod \"9e5852d1-68b9-4547-8cc3-b3170875303f\" (UID: \"9e5852d1-68b9-4547-8cc3-b3170875303f\") " Feb 27 19:07:32 crc kubenswrapper[4981]: I0227 19:07:32.064383 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e5852d1-68b9-4547-8cc3-b3170875303f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9e5852d1-68b9-4547-8cc3-b3170875303f" (UID: "9e5852d1-68b9-4547-8cc3-b3170875303f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:07:32 crc kubenswrapper[4981]: I0227 19:07:32.069843 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e5852d1-68b9-4547-8cc3-b3170875303f-kube-api-access-7kp9b" (OuterVolumeSpecName: "kube-api-access-7kp9b") pod "9e5852d1-68b9-4547-8cc3-b3170875303f" (UID: "9e5852d1-68b9-4547-8cc3-b3170875303f"). InnerVolumeSpecName "kube-api-access-7kp9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:07:32 crc kubenswrapper[4981]: I0227 19:07:32.170064 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kp9b\" (UniqueName: \"kubernetes.io/projected/9e5852d1-68b9-4547-8cc3-b3170875303f-kube-api-access-7kp9b\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:32 crc kubenswrapper[4981]: I0227 19:07:32.170107 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e5852d1-68b9-4547-8cc3-b3170875303f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:32 crc kubenswrapper[4981]: I0227 19:07:32.632483 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kqzmw" Feb 27 19:07:32 crc kubenswrapper[4981]: I0227 19:07:32.632481 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kqzmw" event={"ID":"9e5852d1-68b9-4547-8cc3-b3170875303f","Type":"ContainerDied","Data":"9822b357e6dc4667b7b48452f352443b048fa8ba5a7eefdc17f22c60fd37c85b"} Feb 27 19:07:32 crc kubenswrapper[4981]: I0227 19:07:32.632535 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9822b357e6dc4667b7b48452f352443b048fa8ba5a7eefdc17f22c60fd37c85b" Feb 27 19:07:32 crc kubenswrapper[4981]: I0227 19:07:32.973820 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-v2rmf"] Feb 27 19:07:32 crc kubenswrapper[4981]: E0227 19:07:32.974351 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcba892b-5905-40ec-a2cf-14e71aeba8c1" containerName="dnsmasq-dns" Feb 27 19:07:32 crc kubenswrapper[4981]: I0227 19:07:32.974363 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcba892b-5905-40ec-a2cf-14e71aeba8c1" containerName="dnsmasq-dns" Feb 27 19:07:32 crc kubenswrapper[4981]: E0227 19:07:32.974399 4981 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="9e5852d1-68b9-4547-8cc3-b3170875303f" containerName="mariadb-account-create-update" Feb 27 19:07:32 crc kubenswrapper[4981]: I0227 19:07:32.974406 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e5852d1-68b9-4547-8cc3-b3170875303f" containerName="mariadb-account-create-update" Feb 27 19:07:32 crc kubenswrapper[4981]: E0227 19:07:32.974426 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcba892b-5905-40ec-a2cf-14e71aeba8c1" containerName="init" Feb 27 19:07:32 crc kubenswrapper[4981]: I0227 19:07:32.974433 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcba892b-5905-40ec-a2cf-14e71aeba8c1" containerName="init" Feb 27 19:07:32 crc kubenswrapper[4981]: I0227 19:07:32.974607 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e5852d1-68b9-4547-8cc3-b3170875303f" containerName="mariadb-account-create-update" Feb 27 19:07:32 crc kubenswrapper[4981]: I0227 19:07:32.974626 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcba892b-5905-40ec-a2cf-14e71aeba8c1" containerName="dnsmasq-dns" Feb 27 19:07:32 crc kubenswrapper[4981]: I0227 19:07:32.975178 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-v2rmf" Feb 27 19:07:32 crc kubenswrapper[4981]: I0227 19:07:32.989851 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-v2rmf"] Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.085701 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-2b17-account-create-update-dhdlp"] Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.086187 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fae1f9f4-459b-4894-ba4c-db79218e7fb0-operator-scripts\") pod \"keystone-db-create-v2rmf\" (UID: \"fae1f9f4-459b-4894-ba4c-db79218e7fb0\") " pod="openstack/keystone-db-create-v2rmf" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.086506 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwbn2\" (UniqueName: \"kubernetes.io/projected/fae1f9f4-459b-4894-ba4c-db79218e7fb0-kube-api-access-vwbn2\") pod \"keystone-db-create-v2rmf\" (UID: \"fae1f9f4-459b-4894-ba4c-db79218e7fb0\") " pod="openstack/keystone-db-create-v2rmf" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.086809 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2b17-account-create-update-dhdlp" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.089590 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.094238 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2b17-account-create-update-dhdlp"] Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.171391 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-kk9rm"] Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.175636 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kk9rm" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.187936 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dec7952-ffcd-45f1-b788-669b9a76f577-operator-scripts\") pod \"keystone-2b17-account-create-update-dhdlp\" (UID: \"0dec7952-ffcd-45f1-b788-669b9a76f577\") " pod="openstack/keystone-2b17-account-create-update-dhdlp" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.188297 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzdb8\" (UniqueName: \"kubernetes.io/projected/0dec7952-ffcd-45f1-b788-669b9a76f577-kube-api-access-gzdb8\") pod \"keystone-2b17-account-create-update-dhdlp\" (UID: \"0dec7952-ffcd-45f1-b788-669b9a76f577\") " pod="openstack/keystone-2b17-account-create-update-dhdlp" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.188498 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fae1f9f4-459b-4894-ba4c-db79218e7fb0-operator-scripts\") pod \"keystone-db-create-v2rmf\" (UID: \"fae1f9f4-459b-4894-ba4c-db79218e7fb0\") " 
pod="openstack/keystone-db-create-v2rmf" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.188769 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwbn2\" (UniqueName: \"kubernetes.io/projected/fae1f9f4-459b-4894-ba4c-db79218e7fb0-kube-api-access-vwbn2\") pod \"keystone-db-create-v2rmf\" (UID: \"fae1f9f4-459b-4894-ba4c-db79218e7fb0\") " pod="openstack/keystone-db-create-v2rmf" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.189244 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fae1f9f4-459b-4894-ba4c-db79218e7fb0-operator-scripts\") pod \"keystone-db-create-v2rmf\" (UID: \"fae1f9f4-459b-4894-ba4c-db79218e7fb0\") " pod="openstack/keystone-db-create-v2rmf" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.201731 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-kk9rm"] Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.237737 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwbn2\" (UniqueName: \"kubernetes.io/projected/fae1f9f4-459b-4894-ba4c-db79218e7fb0-kube-api-access-vwbn2\") pod \"keystone-db-create-v2rmf\" (UID: \"fae1f9f4-459b-4894-ba4c-db79218e7fb0\") " pod="openstack/keystone-db-create-v2rmf" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.258047 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9966-account-create-update-6d4f6"] Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.259111 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9966-account-create-update-6d4f6" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.261818 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.273173 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9966-account-create-update-6d4f6"] Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.289906 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hqnp\" (UniqueName: \"kubernetes.io/projected/a5088f9d-7a73-4e90-bb3e-b66bc16b840f-kube-api-access-6hqnp\") pod \"placement-db-create-kk9rm\" (UID: \"a5088f9d-7a73-4e90-bb3e-b66bc16b840f\") " pod="openstack/placement-db-create-kk9rm" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.289966 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5088f9d-7a73-4e90-bb3e-b66bc16b840f-operator-scripts\") pod \"placement-db-create-kk9rm\" (UID: \"a5088f9d-7a73-4e90-bb3e-b66bc16b840f\") " pod="openstack/placement-db-create-kk9rm" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.289996 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dec7952-ffcd-45f1-b788-669b9a76f577-operator-scripts\") pod \"keystone-2b17-account-create-update-dhdlp\" (UID: \"0dec7952-ffcd-45f1-b788-669b9a76f577\") " pod="openstack/keystone-2b17-account-create-update-dhdlp" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.290222 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzdb8\" (UniqueName: \"kubernetes.io/projected/0dec7952-ffcd-45f1-b788-669b9a76f577-kube-api-access-gzdb8\") pod \"keystone-2b17-account-create-update-dhdlp\" (UID: 
\"0dec7952-ffcd-45f1-b788-669b9a76f577\") " pod="openstack/keystone-2b17-account-create-update-dhdlp" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.290615 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dec7952-ffcd-45f1-b788-669b9a76f577-operator-scripts\") pod \"keystone-2b17-account-create-update-dhdlp\" (UID: \"0dec7952-ffcd-45f1-b788-669b9a76f577\") " pod="openstack/keystone-2b17-account-create-update-dhdlp" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.306696 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzdb8\" (UniqueName: \"kubernetes.io/projected/0dec7952-ffcd-45f1-b788-669b9a76f577-kube-api-access-gzdb8\") pod \"keystone-2b17-account-create-update-dhdlp\" (UID: \"0dec7952-ffcd-45f1-b788-669b9a76f577\") " pod="openstack/keystone-2b17-account-create-update-dhdlp" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.330555 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-v2rmf" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.391353 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hqnp\" (UniqueName: \"kubernetes.io/projected/a5088f9d-7a73-4e90-bb3e-b66bc16b840f-kube-api-access-6hqnp\") pod \"placement-db-create-kk9rm\" (UID: \"a5088f9d-7a73-4e90-bb3e-b66bc16b840f\") " pod="openstack/placement-db-create-kk9rm" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.391428 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5088f9d-7a73-4e90-bb3e-b66bc16b840f-operator-scripts\") pod \"placement-db-create-kk9rm\" (UID: \"a5088f9d-7a73-4e90-bb3e-b66bc16b840f\") " pod="openstack/placement-db-create-kk9rm" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.391477 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3edce3d3-96e5-4fbe-8ef7-ba2d01d06025-operator-scripts\") pod \"placement-9966-account-create-update-6d4f6\" (UID: \"3edce3d3-96e5-4fbe-8ef7-ba2d01d06025\") " pod="openstack/placement-9966-account-create-update-6d4f6" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.391621 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsp2z\" (UniqueName: \"kubernetes.io/projected/3edce3d3-96e5-4fbe-8ef7-ba2d01d06025-kube-api-access-qsp2z\") pod \"placement-9966-account-create-update-6d4f6\" (UID: \"3edce3d3-96e5-4fbe-8ef7-ba2d01d06025\") " pod="openstack/placement-9966-account-create-update-6d4f6" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.392452 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5088f9d-7a73-4e90-bb3e-b66bc16b840f-operator-scripts\") pod 
\"placement-db-create-kk9rm\" (UID: \"a5088f9d-7a73-4e90-bb3e-b66bc16b840f\") " pod="openstack/placement-db-create-kk9rm" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.408508 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2b17-account-create-update-dhdlp" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.430200 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hqnp\" (UniqueName: \"kubernetes.io/projected/a5088f9d-7a73-4e90-bb3e-b66bc16b840f-kube-api-access-6hqnp\") pod \"placement-db-create-kk9rm\" (UID: \"a5088f9d-7a73-4e90-bb3e-b66bc16b840f\") " pod="openstack/placement-db-create-kk9rm" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.493645 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3edce3d3-96e5-4fbe-8ef7-ba2d01d06025-operator-scripts\") pod \"placement-9966-account-create-update-6d4f6\" (UID: \"3edce3d3-96e5-4fbe-8ef7-ba2d01d06025\") " pod="openstack/placement-9966-account-create-update-6d4f6" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.493990 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsp2z\" (UniqueName: \"kubernetes.io/projected/3edce3d3-96e5-4fbe-8ef7-ba2d01d06025-kube-api-access-qsp2z\") pod \"placement-9966-account-create-update-6d4f6\" (UID: \"3edce3d3-96e5-4fbe-8ef7-ba2d01d06025\") " pod="openstack/placement-9966-account-create-update-6d4f6" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.494812 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3edce3d3-96e5-4fbe-8ef7-ba2d01d06025-operator-scripts\") pod \"placement-9966-account-create-update-6d4f6\" (UID: \"3edce3d3-96e5-4fbe-8ef7-ba2d01d06025\") " pod="openstack/placement-9966-account-create-update-6d4f6" Feb 27 19:07:33 crc 
kubenswrapper[4981]: I0227 19:07:33.498861 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kk9rm" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.514666 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsp2z\" (UniqueName: \"kubernetes.io/projected/3edce3d3-96e5-4fbe-8ef7-ba2d01d06025-kube-api-access-qsp2z\") pod \"placement-9966-account-create-update-6d4f6\" (UID: \"3edce3d3-96e5-4fbe-8ef7-ba2d01d06025\") " pod="openstack/placement-9966-account-create-update-6d4f6" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.582285 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9966-account-create-update-6d4f6" Feb 27 19:07:33 crc kubenswrapper[4981]: I0227 19:07:33.640189 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcba892b-5905-40ec-a2cf-14e71aeba8c1" path="/var/lib/kubelet/pods/fcba892b-5905-40ec-a2cf-14e71aeba8c1/volumes" Feb 27 19:07:34 crc kubenswrapper[4981]: I0227 19:07:34.103871 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-etc-swift\") pod \"swift-storage-0\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " pod="openstack/swift-storage-0" Feb 27 19:07:34 crc kubenswrapper[4981]: E0227 19:07:34.104206 4981 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 19:07:34 crc kubenswrapper[4981]: E0227 19:07:34.104253 4981 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 19:07:34 crc kubenswrapper[4981]: E0227 19:07:34.104342 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-etc-swift 
podName:c9c5bb1a-80fb-459f-acb9-e3751c60f684 nodeName:}" failed. No retries permitted until 2026-02-27 19:07:42.104311163 +0000 UTC m=+1361.583092323 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-etc-swift") pod "swift-storage-0" (UID: "c9c5bb1a-80fb-459f-acb9-e3751c60f684") : configmap "swift-ring-files" not found Feb 27 19:07:34 crc kubenswrapper[4981]: I0227 19:07:34.744202 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 27 19:07:35 crc kubenswrapper[4981]: I0227 19:07:35.657697 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9966-account-create-update-6d4f6"] Feb 27 19:07:35 crc kubenswrapper[4981]: I0227 19:07:35.665068 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-cff9l" event={"ID":"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c","Type":"ContainerStarted","Data":"0bd7dd69404c0f054ec7614f54e37bb8d573b13b49e99519637e359c83731f43"} Feb 27 19:07:35 crc kubenswrapper[4981]: I0227 19:07:35.665906 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-cff9l" Feb 27 19:07:35 crc kubenswrapper[4981]: I0227 19:07:35.736828 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-cff9l" podStartSLOduration=11.736757248 podStartE2EDuration="11.736757248s" podCreationTimestamp="2026-02-27 19:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:07:35.688414196 +0000 UTC m=+1355.167195386" watchObservedRunningTime="2026-02-27 19:07:35.736757248 +0000 UTC m=+1355.215538418" Feb 27 19:07:35 crc kubenswrapper[4981]: I0227 19:07:35.746150 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-kk9rm"] 
Feb 27 19:07:35 crc kubenswrapper[4981]: I0227 19:07:35.765476 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2b17-account-create-update-dhdlp"] Feb 27 19:07:35 crc kubenswrapper[4981]: I0227 19:07:35.887344 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-v2rmf"] Feb 27 19:07:35 crc kubenswrapper[4981]: W0227 19:07:35.969871 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfae1f9f4_459b_4894_ba4c_db79218e7fb0.slice/crio-f62be59c65c9470a39064abdefccaa5600c16442f678e9e6169577910b519cb4 WatchSource:0}: Error finding container f62be59c65c9470a39064abdefccaa5600c16442f678e9e6169577910b519cb4: Status 404 returned error can't find the container with id f62be59c65c9470a39064abdefccaa5600c16442f678e9e6169577910b519cb4 Feb 27 19:07:36 crc kubenswrapper[4981]: I0227 19:07:36.675257 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2b17-account-create-update-dhdlp" event={"ID":"0dec7952-ffcd-45f1-b788-669b9a76f577","Type":"ContainerStarted","Data":"39d5ad09342b70ef57075489904198cdcbe2e48fc3cdd2dd68474762052581d9"} Feb 27 19:07:36 crc kubenswrapper[4981]: I0227 19:07:36.675310 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2b17-account-create-update-dhdlp" event={"ID":"0dec7952-ffcd-45f1-b788-669b9a76f577","Type":"ContainerStarted","Data":"acf24a8a9904ab31fc2fe7792c70e15aed5b37c9f0b2fd9ebd55c66877555484"} Feb 27 19:07:36 crc kubenswrapper[4981]: I0227 19:07:36.677150 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9966-account-create-update-6d4f6" event={"ID":"3edce3d3-96e5-4fbe-8ef7-ba2d01d06025","Type":"ContainerStarted","Data":"2eaac1853ad432d3fb764761e37438f6ab0922f0575edfce5b4212dcef893563"} Feb 27 19:07:36 crc kubenswrapper[4981]: I0227 19:07:36.677178 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-9966-account-create-update-6d4f6" event={"ID":"3edce3d3-96e5-4fbe-8ef7-ba2d01d06025","Type":"ContainerStarted","Data":"9aa5c9467a35ac915af0a6eaecb8db5187b1061079e5b2acb4fb8f042a82dcb4"} Feb 27 19:07:36 crc kubenswrapper[4981]: I0227 19:07:36.679835 4981 generic.go:334] "Generic (PLEG): container finished" podID="fae1f9f4-459b-4894-ba4c-db79218e7fb0" containerID="9cf48594755d9db8e2e66fd22d01a7734be5676945d918f8133a713b2764bb34" exitCode=0 Feb 27 19:07:36 crc kubenswrapper[4981]: I0227 19:07:36.679962 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-v2rmf" event={"ID":"fae1f9f4-459b-4894-ba4c-db79218e7fb0","Type":"ContainerDied","Data":"9cf48594755d9db8e2e66fd22d01a7734be5676945d918f8133a713b2764bb34"} Feb 27 19:07:36 crc kubenswrapper[4981]: I0227 19:07:36.679989 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-v2rmf" event={"ID":"fae1f9f4-459b-4894-ba4c-db79218e7fb0","Type":"ContainerStarted","Data":"f62be59c65c9470a39064abdefccaa5600c16442f678e9e6169577910b519cb4"} Feb 27 19:07:36 crc kubenswrapper[4981]: I0227 19:07:36.683166 4981 generic.go:334] "Generic (PLEG): container finished" podID="a5088f9d-7a73-4e90-bb3e-b66bc16b840f" containerID="d6dcd57121655160a41e0b97b4fffb64705b3dfd74fd3be4461c8485a6ada855" exitCode=0 Feb 27 19:07:36 crc kubenswrapper[4981]: I0227 19:07:36.684035 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kk9rm" event={"ID":"a5088f9d-7a73-4e90-bb3e-b66bc16b840f","Type":"ContainerDied","Data":"d6dcd57121655160a41e0b97b4fffb64705b3dfd74fd3be4461c8485a6ada855"} Feb 27 19:07:36 crc kubenswrapper[4981]: I0227 19:07:36.684081 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kk9rm" event={"ID":"a5088f9d-7a73-4e90-bb3e-b66bc16b840f","Type":"ContainerStarted","Data":"6b4e5502ef289fa593860760edf7a04544c9a7ed558504a3360a8cd42b86ab07"} Feb 27 19:07:36 crc 
kubenswrapper[4981]: I0227 19:07:36.692431 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-2b17-account-create-update-dhdlp" podStartSLOduration=3.6924158990000002 podStartE2EDuration="3.692415899s" podCreationTimestamp="2026-02-27 19:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:07:36.690243083 +0000 UTC m=+1356.169024253" watchObservedRunningTime="2026-02-27 19:07:36.692415899 +0000 UTC m=+1356.171197059" Feb 27 19:07:36 crc kubenswrapper[4981]: I0227 19:07:36.712566 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-9966-account-create-update-6d4f6" podStartSLOduration=3.712551437 podStartE2EDuration="3.712551437s" podCreationTimestamp="2026-02-27 19:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:07:36.708513665 +0000 UTC m=+1356.187294855" watchObservedRunningTime="2026-02-27 19:07:36.712551437 +0000 UTC m=+1356.191332597" Feb 27 19:07:36 crc kubenswrapper[4981]: I0227 19:07:36.767954 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 27 19:07:36 crc kubenswrapper[4981]: I0227 19:07:36.955409 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 27 19:07:36 crc kubenswrapper[4981]: I0227 19:07:36.958113 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 27 19:07:36 crc kubenswrapper[4981]: I0227 19:07:36.960105 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-8l55q" Feb 27 19:07:36 crc kubenswrapper[4981]: I0227 19:07:36.960280 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 27 19:07:36 crc kubenswrapper[4981]: I0227 19:07:36.960444 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 27 19:07:36 crc kubenswrapper[4981]: I0227 19:07:36.962038 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 27 19:07:36 crc kubenswrapper[4981]: I0227 19:07:36.972799 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.060936 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d923459f-90f4-4399-80a0-4e22daa1eadf-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " pod="openstack/ovn-northd-0" Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.061050 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d923459f-90f4-4399-80a0-4e22daa1eadf-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " pod="openstack/ovn-northd-0" Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.061125 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d923459f-90f4-4399-80a0-4e22daa1eadf-config\") pod \"ovn-northd-0\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " pod="openstack/ovn-northd-0" Feb 27 
19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.061151 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgxvs\" (UniqueName: \"kubernetes.io/projected/d923459f-90f4-4399-80a0-4e22daa1eadf-kube-api-access-xgxvs\") pod \"ovn-northd-0\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " pod="openstack/ovn-northd-0" Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.061165 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d923459f-90f4-4399-80a0-4e22daa1eadf-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " pod="openstack/ovn-northd-0" Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.061200 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d923459f-90f4-4399-80a0-4e22daa1eadf-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " pod="openstack/ovn-northd-0" Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.061248 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d923459f-90f4-4399-80a0-4e22daa1eadf-scripts\") pod \"ovn-northd-0\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " pod="openstack/ovn-northd-0" Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.162613 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d923459f-90f4-4399-80a0-4e22daa1eadf-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " pod="openstack/ovn-northd-0" Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.162660 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d923459f-90f4-4399-80a0-4e22daa1eadf-config\") pod \"ovn-northd-0\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " pod="openstack/ovn-northd-0" Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.162688 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgxvs\" (UniqueName: \"kubernetes.io/projected/d923459f-90f4-4399-80a0-4e22daa1eadf-kube-api-access-xgxvs\") pod \"ovn-northd-0\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " pod="openstack/ovn-northd-0" Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.162706 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d923459f-90f4-4399-80a0-4e22daa1eadf-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " pod="openstack/ovn-northd-0" Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.162749 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d923459f-90f4-4399-80a0-4e22daa1eadf-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " pod="openstack/ovn-northd-0" Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.162804 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d923459f-90f4-4399-80a0-4e22daa1eadf-scripts\") pod \"ovn-northd-0\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " pod="openstack/ovn-northd-0" Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.162825 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d923459f-90f4-4399-80a0-4e22daa1eadf-ovn-rundir\") pod \"ovn-northd-0\" (UID: 
\"d923459f-90f4-4399-80a0-4e22daa1eadf\") " pod="openstack/ovn-northd-0" Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.163285 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d923459f-90f4-4399-80a0-4e22daa1eadf-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " pod="openstack/ovn-northd-0" Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.164383 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d923459f-90f4-4399-80a0-4e22daa1eadf-scripts\") pod \"ovn-northd-0\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " pod="openstack/ovn-northd-0" Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.164452 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d923459f-90f4-4399-80a0-4e22daa1eadf-config\") pod \"ovn-northd-0\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " pod="openstack/ovn-northd-0" Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.169848 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d923459f-90f4-4399-80a0-4e22daa1eadf-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " pod="openstack/ovn-northd-0" Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.177119 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d923459f-90f4-4399-80a0-4e22daa1eadf-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " pod="openstack/ovn-northd-0" Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.177142 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d923459f-90f4-4399-80a0-4e22daa1eadf-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " pod="openstack/ovn-northd-0" Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.181070 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgxvs\" (UniqueName: \"kubernetes.io/projected/d923459f-90f4-4399-80a0-4e22daa1eadf-kube-api-access-xgxvs\") pod \"ovn-northd-0\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " pod="openstack/ovn-northd-0" Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.329397 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.692590 4981 generic.go:334] "Generic (PLEG): container finished" podID="0dec7952-ffcd-45f1-b788-669b9a76f577" containerID="39d5ad09342b70ef57075489904198cdcbe2e48fc3cdd2dd68474762052581d9" exitCode=0 Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.692648 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2b17-account-create-update-dhdlp" event={"ID":"0dec7952-ffcd-45f1-b788-669b9a76f577","Type":"ContainerDied","Data":"39d5ad09342b70ef57075489904198cdcbe2e48fc3cdd2dd68474762052581d9"} Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.697154 4981 generic.go:334] "Generic (PLEG): container finished" podID="3edce3d3-96e5-4fbe-8ef7-ba2d01d06025" containerID="2eaac1853ad432d3fb764761e37438f6ab0922f0575edfce5b4212dcef893563" exitCode=0 Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.697232 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9966-account-create-update-6d4f6" event={"ID":"3edce3d3-96e5-4fbe-8ef7-ba2d01d06025","Type":"ContainerDied","Data":"2eaac1853ad432d3fb764761e37438f6ab0922f0575edfce5b4212dcef893563"} Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.698554 4981 generic.go:334] "Generic (PLEG): container finished" 
podID="991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" containerID="739bf9a3ac6ea571ec9bca214592f39bd326faa6890041f869de8b117f30fc3d" exitCode=0 Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.698610 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3","Type":"ContainerDied","Data":"739bf9a3ac6ea571ec9bca214592f39bd326faa6890041f869de8b117f30fc3d"} Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.701472 4981 generic.go:334] "Generic (PLEG): container finished" podID="f928877c-eaff-4ab4-ae3b-ba6ed721642c" containerID="d378fa26bc8f0c5f0f946f4dfecf68788a807fe6b3c792500d882ea0dd773eb9" exitCode=0 Feb 27 19:07:37 crc kubenswrapper[4981]: I0227 19:07:37.702320 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f928877c-eaff-4ab4-ae3b-ba6ed721642c","Type":"ContainerDied","Data":"d378fa26bc8f0c5f0f946f4dfecf68788a807fe6b3c792500d882ea0dd773eb9"} Feb 27 19:07:38 crc kubenswrapper[4981]: I0227 19:07:38.167737 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-n5d2t" podUID="214d65cb-9030-4093-853c-c1485fc1a30a" containerName="ovn-controller" probeResult="failure" output=< Feb 27 19:07:38 crc kubenswrapper[4981]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 27 19:07:38 crc kubenswrapper[4981]: > Feb 27 19:07:38 crc kubenswrapper[4981]: I0227 19:07:38.187819 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:07:39 crc kubenswrapper[4981]: I0227 19:07:39.848555 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-zxxsl"] Feb 27 19:07:39 crc kubenswrapper[4981]: I0227 19:07:39.850505 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-zxxsl" Feb 27 19:07:39 crc kubenswrapper[4981]: I0227 19:07:39.865369 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e72a-account-create-update-p99q6"] Feb 27 19:07:39 crc kubenswrapper[4981]: I0227 19:07:39.867012 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e72a-account-create-update-p99q6" Feb 27 19:07:39 crc kubenswrapper[4981]: I0227 19:07:39.870452 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 27 19:07:39 crc kubenswrapper[4981]: I0227 19:07:39.878036 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-zxxsl"] Feb 27 19:07:39 crc kubenswrapper[4981]: I0227 19:07:39.885288 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e72a-account-create-update-p99q6"] Feb 27 19:07:39 crc kubenswrapper[4981]: I0227 19:07:39.917035 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40cbb61c-1aa8-4477-8579-76699afce28b-operator-scripts\") pod \"glance-db-create-zxxsl\" (UID: \"40cbb61c-1aa8-4477-8579-76699afce28b\") " pod="openstack/glance-db-create-zxxsl" Feb 27 19:07:39 crc kubenswrapper[4981]: I0227 19:07:39.917169 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkb7b\" (UniqueName: \"kubernetes.io/projected/40cbb61c-1aa8-4477-8579-76699afce28b-kube-api-access-lkb7b\") pod \"glance-db-create-zxxsl\" (UID: \"40cbb61c-1aa8-4477-8579-76699afce28b\") " pod="openstack/glance-db-create-zxxsl" Feb 27 19:07:39 crc kubenswrapper[4981]: I0227 19:07:39.944498 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-kqzmw"] Feb 27 19:07:39 crc kubenswrapper[4981]: I0227 19:07:39.953175 4981 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/root-account-create-update-kqzmw"] Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.018450 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40cbb61c-1aa8-4477-8579-76699afce28b-operator-scripts\") pod \"glance-db-create-zxxsl\" (UID: \"40cbb61c-1aa8-4477-8579-76699afce28b\") " pod="openstack/glance-db-create-zxxsl" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.018527 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bxd5\" (UniqueName: \"kubernetes.io/projected/f541ace2-11c0-4232-b34b-d7079bfc597b-kube-api-access-5bxd5\") pod \"glance-e72a-account-create-update-p99q6\" (UID: \"f541ace2-11c0-4232-b34b-d7079bfc597b\") " pod="openstack/glance-e72a-account-create-update-p99q6" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.018552 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkb7b\" (UniqueName: \"kubernetes.io/projected/40cbb61c-1aa8-4477-8579-76699afce28b-kube-api-access-lkb7b\") pod \"glance-db-create-zxxsl\" (UID: \"40cbb61c-1aa8-4477-8579-76699afce28b\") " pod="openstack/glance-db-create-zxxsl" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.018574 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f541ace2-11c0-4232-b34b-d7079bfc597b-operator-scripts\") pod \"glance-e72a-account-create-update-p99q6\" (UID: \"f541ace2-11c0-4232-b34b-d7079bfc597b\") " pod="openstack/glance-e72a-account-create-update-p99q6" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.020115 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40cbb61c-1aa8-4477-8579-76699afce28b-operator-scripts\") pod 
\"glance-db-create-zxxsl\" (UID: \"40cbb61c-1aa8-4477-8579-76699afce28b\") " pod="openstack/glance-db-create-zxxsl" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.031299 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-flk8g"] Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.033079 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-flk8g" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.035494 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.041852 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkb7b\" (UniqueName: \"kubernetes.io/projected/40cbb61c-1aa8-4477-8579-76699afce28b-kube-api-access-lkb7b\") pod \"glance-db-create-zxxsl\" (UID: \"40cbb61c-1aa8-4477-8579-76699afce28b\") " pod="openstack/glance-db-create-zxxsl" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.046259 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-flk8g"] Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.120551 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f541ace2-11c0-4232-b34b-d7079bfc597b-operator-scripts\") pod \"glance-e72a-account-create-update-p99q6\" (UID: \"f541ace2-11c0-4232-b34b-d7079bfc597b\") " pod="openstack/glance-e72a-account-create-update-p99q6" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.120708 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg92k\" (UniqueName: \"kubernetes.io/projected/619b6b33-ff2d-4b2c-984e-f1c65bfcdffd-kube-api-access-zg92k\") pod \"root-account-create-update-flk8g\" (UID: \"619b6b33-ff2d-4b2c-984e-f1c65bfcdffd\") " 
pod="openstack/root-account-create-update-flk8g" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.120775 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/619b6b33-ff2d-4b2c-984e-f1c65bfcdffd-operator-scripts\") pod \"root-account-create-update-flk8g\" (UID: \"619b6b33-ff2d-4b2c-984e-f1c65bfcdffd\") " pod="openstack/root-account-create-update-flk8g" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.120828 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bxd5\" (UniqueName: \"kubernetes.io/projected/f541ace2-11c0-4232-b34b-d7079bfc597b-kube-api-access-5bxd5\") pod \"glance-e72a-account-create-update-p99q6\" (UID: \"f541ace2-11c0-4232-b34b-d7079bfc597b\") " pod="openstack/glance-e72a-account-create-update-p99q6" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.121545 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f541ace2-11c0-4232-b34b-d7079bfc597b-operator-scripts\") pod \"glance-e72a-account-create-update-p99q6\" (UID: \"f541ace2-11c0-4232-b34b-d7079bfc597b\") " pod="openstack/glance-e72a-account-create-update-p99q6" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.140025 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bxd5\" (UniqueName: \"kubernetes.io/projected/f541ace2-11c0-4232-b34b-d7079bfc597b-kube-api-access-5bxd5\") pod \"glance-e72a-account-create-update-p99q6\" (UID: \"f541ace2-11c0-4232-b34b-d7079bfc597b\") " pod="openstack/glance-e72a-account-create-update-p99q6" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.201585 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-zxxsl" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.212245 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e72a-account-create-update-p99q6" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.222069 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg92k\" (UniqueName: \"kubernetes.io/projected/619b6b33-ff2d-4b2c-984e-f1c65bfcdffd-kube-api-access-zg92k\") pod \"root-account-create-update-flk8g\" (UID: \"619b6b33-ff2d-4b2c-984e-f1c65bfcdffd\") " pod="openstack/root-account-create-update-flk8g" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.222159 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/619b6b33-ff2d-4b2c-984e-f1c65bfcdffd-operator-scripts\") pod \"root-account-create-update-flk8g\" (UID: \"619b6b33-ff2d-4b2c-984e-f1c65bfcdffd\") " pod="openstack/root-account-create-update-flk8g" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.223243 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/619b6b33-ff2d-4b2c-984e-f1c65bfcdffd-operator-scripts\") pod \"root-account-create-update-flk8g\" (UID: \"619b6b33-ff2d-4b2c-984e-f1c65bfcdffd\") " pod="openstack/root-account-create-update-flk8g" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.243799 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg92k\" (UniqueName: \"kubernetes.io/projected/619b6b33-ff2d-4b2c-984e-f1c65bfcdffd-kube-api-access-zg92k\") pod \"root-account-create-update-flk8g\" (UID: \"619b6b33-ff2d-4b2c-984e-f1c65bfcdffd\") " pod="openstack/root-account-create-update-flk8g" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.374954 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-kk9rm" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.387883 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-flk8g" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.444923 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-v2rmf" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.452675 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2b17-account-create-update-dhdlp" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.471445 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9966-account-create-update-6d4f6" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.527448 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fae1f9f4-459b-4894-ba4c-db79218e7fb0-operator-scripts\") pod \"fae1f9f4-459b-4894-ba4c-db79218e7fb0\" (UID: \"fae1f9f4-459b-4894-ba4c-db79218e7fb0\") " Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.527532 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwbn2\" (UniqueName: \"kubernetes.io/projected/fae1f9f4-459b-4894-ba4c-db79218e7fb0-kube-api-access-vwbn2\") pod \"fae1f9f4-459b-4894-ba4c-db79218e7fb0\" (UID: \"fae1f9f4-459b-4894-ba4c-db79218e7fb0\") " Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.527590 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dec7952-ffcd-45f1-b788-669b9a76f577-operator-scripts\") pod \"0dec7952-ffcd-45f1-b788-669b9a76f577\" (UID: \"0dec7952-ffcd-45f1-b788-669b9a76f577\") " Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.528395 4981 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5088f9d-7a73-4e90-bb3e-b66bc16b840f-operator-scripts\") pod \"a5088f9d-7a73-4e90-bb3e-b66bc16b840f\" (UID: \"a5088f9d-7a73-4e90-bb3e-b66bc16b840f\") " Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.528441 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fae1f9f4-459b-4894-ba4c-db79218e7fb0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fae1f9f4-459b-4894-ba4c-db79218e7fb0" (UID: "fae1f9f4-459b-4894-ba4c-db79218e7fb0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.528541 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzdb8\" (UniqueName: \"kubernetes.io/projected/0dec7952-ffcd-45f1-b788-669b9a76f577-kube-api-access-gzdb8\") pod \"0dec7952-ffcd-45f1-b788-669b9a76f577\" (UID: \"0dec7952-ffcd-45f1-b788-669b9a76f577\") " Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.528576 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hqnp\" (UniqueName: \"kubernetes.io/projected/a5088f9d-7a73-4e90-bb3e-b66bc16b840f-kube-api-access-6hqnp\") pod \"a5088f9d-7a73-4e90-bb3e-b66bc16b840f\" (UID: \"a5088f9d-7a73-4e90-bb3e-b66bc16b840f\") " Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.528763 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dec7952-ffcd-45f1-b788-669b9a76f577-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0dec7952-ffcd-45f1-b788-669b9a76f577" (UID: "0dec7952-ffcd-45f1-b788-669b9a76f577"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.529022 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5088f9d-7a73-4e90-bb3e-b66bc16b840f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a5088f9d-7a73-4e90-bb3e-b66bc16b840f" (UID: "a5088f9d-7a73-4e90-bb3e-b66bc16b840f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.529365 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fae1f9f4-459b-4894-ba4c-db79218e7fb0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.529390 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dec7952-ffcd-45f1-b788-669b9a76f577-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.529406 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5088f9d-7a73-4e90-bb3e-b66bc16b840f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.532682 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dec7952-ffcd-45f1-b788-669b9a76f577-kube-api-access-gzdb8" (OuterVolumeSpecName: "kube-api-access-gzdb8") pod "0dec7952-ffcd-45f1-b788-669b9a76f577" (UID: "0dec7952-ffcd-45f1-b788-669b9a76f577"). InnerVolumeSpecName "kube-api-access-gzdb8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.537298 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5088f9d-7a73-4e90-bb3e-b66bc16b840f-kube-api-access-6hqnp" (OuterVolumeSpecName: "kube-api-access-6hqnp") pod "a5088f9d-7a73-4e90-bb3e-b66bc16b840f" (UID: "a5088f9d-7a73-4e90-bb3e-b66bc16b840f"). InnerVolumeSpecName "kube-api-access-6hqnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.537418 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fae1f9f4-459b-4894-ba4c-db79218e7fb0-kube-api-access-vwbn2" (OuterVolumeSpecName: "kube-api-access-vwbn2") pod "fae1f9f4-459b-4894-ba4c-db79218e7fb0" (UID: "fae1f9f4-459b-4894-ba4c-db79218e7fb0"). InnerVolumeSpecName "kube-api-access-vwbn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.630381 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3edce3d3-96e5-4fbe-8ef7-ba2d01d06025-operator-scripts\") pod \"3edce3d3-96e5-4fbe-8ef7-ba2d01d06025\" (UID: \"3edce3d3-96e5-4fbe-8ef7-ba2d01d06025\") " Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.630781 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3edce3d3-96e5-4fbe-8ef7-ba2d01d06025-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3edce3d3-96e5-4fbe-8ef7-ba2d01d06025" (UID: "3edce3d3-96e5-4fbe-8ef7-ba2d01d06025"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.630837 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsp2z\" (UniqueName: \"kubernetes.io/projected/3edce3d3-96e5-4fbe-8ef7-ba2d01d06025-kube-api-access-qsp2z\") pod \"3edce3d3-96e5-4fbe-8ef7-ba2d01d06025\" (UID: \"3edce3d3-96e5-4fbe-8ef7-ba2d01d06025\") " Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.632344 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzdb8\" (UniqueName: \"kubernetes.io/projected/0dec7952-ffcd-45f1-b788-669b9a76f577-kube-api-access-gzdb8\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.632360 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hqnp\" (UniqueName: \"kubernetes.io/projected/a5088f9d-7a73-4e90-bb3e-b66bc16b840f-kube-api-access-6hqnp\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.632371 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3edce3d3-96e5-4fbe-8ef7-ba2d01d06025-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.632388 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwbn2\" (UniqueName: \"kubernetes.io/projected/fae1f9f4-459b-4894-ba4c-db79218e7fb0-kube-api-access-vwbn2\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.638744 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3edce3d3-96e5-4fbe-8ef7-ba2d01d06025-kube-api-access-qsp2z" (OuterVolumeSpecName: "kube-api-access-qsp2z") pod "3edce3d3-96e5-4fbe-8ef7-ba2d01d06025" (UID: "3edce3d3-96e5-4fbe-8ef7-ba2d01d06025"). InnerVolumeSpecName "kube-api-access-qsp2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.727913 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kk9rm" event={"ID":"a5088f9d-7a73-4e90-bb3e-b66bc16b840f","Type":"ContainerDied","Data":"6b4e5502ef289fa593860760edf7a04544c9a7ed558504a3360a8cd42b86ab07"} Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.728473 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b4e5502ef289fa593860760edf7a04544c9a7ed558504a3360a8cd42b86ab07" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.728566 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kk9rm" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.731433 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2b17-account-create-update-dhdlp" event={"ID":"0dec7952-ffcd-45f1-b788-669b9a76f577","Type":"ContainerDied","Data":"acf24a8a9904ab31fc2fe7792c70e15aed5b37c9f0b2fd9ebd55c66877555484"} Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.731493 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acf24a8a9904ab31fc2fe7792c70e15aed5b37c9f0b2fd9ebd55c66877555484" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.731576 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2b17-account-create-update-dhdlp" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.734595 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9966-account-create-update-6d4f6" event={"ID":"3edce3d3-96e5-4fbe-8ef7-ba2d01d06025","Type":"ContainerDied","Data":"9aa5c9467a35ac915af0a6eaecb8db5187b1061079e5b2acb4fb8f042a82dcb4"} Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.734618 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9aa5c9467a35ac915af0a6eaecb8db5187b1061079e5b2acb4fb8f042a82dcb4" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.734943 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9966-account-create-update-6d4f6" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.739821 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-v2rmf" event={"ID":"fae1f9f4-459b-4894-ba4c-db79218e7fb0","Type":"ContainerDied","Data":"f62be59c65c9470a39064abdefccaa5600c16442f678e9e6169577910b519cb4"} Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.739865 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f62be59c65c9470a39064abdefccaa5600c16442f678e9e6169577910b519cb4" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.739909 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-v2rmf" Feb 27 19:07:40 crc kubenswrapper[4981]: I0227 19:07:40.747867 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsp2z\" (UniqueName: \"kubernetes.io/projected/3edce3d3-96e5-4fbe-8ef7-ba2d01d06025-kube-api-access-qsp2z\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.025916 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-flk8g"] Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.110953 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e72a-account-create-update-p99q6"] Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.132650 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-zxxsl"] Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.139075 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.641916 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e5852d1-68b9-4547-8cc3-b3170875303f" path="/var/lib/kubelet/pods/9e5852d1-68b9-4547-8cc3-b3170875303f/volumes" Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.753333 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f928877c-eaff-4ab4-ae3b-ba6ed721642c","Type":"ContainerStarted","Data":"b17d1c158ee9a02d955c961d36f3778f1d0ce99cc8890e879aaabb3483dbe8a8"} Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.753578 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.756566 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3","Type":"ContainerStarted","Data":"57f6f61d81b4bb62c7f39a3b1be260072a8b0b4fe66cc915fa2a92ab863c30e7"} Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.756925 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.759518 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"073fb193-6587-4c6c-b20d-82a5b3075a20","Type":"ContainerStarted","Data":"290ab13d1e06be99faf848d99a57b28cdaead931e71465163c086723480eca80"} Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.760680 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.767332 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-flk8g" event={"ID":"619b6b33-ff2d-4b2c-984e-f1c65bfcdffd","Type":"ContainerStarted","Data":"1d4173a980b74b2b82881942d9c0606a560f6be487e857f7c7eadf0f82dd0572"} Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.767573 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-flk8g" event={"ID":"619b6b33-ff2d-4b2c-984e-f1c65bfcdffd","Type":"ContainerStarted","Data":"9cc6d706d4068116c6be949061ccb4f6daafe360931b7106052097862ce54732"} Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.772593 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e72a-account-create-update-p99q6" event={"ID":"f541ace2-11c0-4232-b34b-d7079bfc597b","Type":"ContainerStarted","Data":"26e39f1f6bf137b20eaa6d22829be4fae92fac625401e0025afb0e9f99e16dea"} Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.772627 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e72a-account-create-update-p99q6" 
event={"ID":"f541ace2-11c0-4232-b34b-d7079bfc597b","Type":"ContainerStarted","Data":"aeee5d7c801ce0dcebdc8e91567d431a4fa17276e7fbf38a67ee02170ccfce22"} Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.775195 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j6c5h" event={"ID":"fcfd5f62-e6b9-4a63-8030-df81c9d7b580","Type":"ContainerStarted","Data":"52292a3a2c1906d86541be54cde391b3e2ea44195dfca89da85c7b92391c2d63"} Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.776867 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-zxxsl" event={"ID":"40cbb61c-1aa8-4477-8579-76699afce28b","Type":"ContainerStarted","Data":"e495f30379ea5eec7fcfc406ec77e013036179cba6329049e46a388769876ad2"} Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.776975 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-zxxsl" event={"ID":"40cbb61c-1aa8-4477-8579-76699afce28b","Type":"ContainerStarted","Data":"352cc56b9c719f88a23a300ba049cc9d6b5a2fc8c1b42e34708ad459a40e4232"} Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.777895 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d923459f-90f4-4399-80a0-4e22daa1eadf","Type":"ContainerStarted","Data":"c75e74454c171b9dd36d124c75f8cb0433369d53595579de8223945018970929"} Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.801394 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=41.11467092 podStartE2EDuration="1m35.801373688s" podCreationTimestamp="2026-02-27 19:06:06 +0000 UTC" firstStartedPulling="2026-02-27 19:06:08.822760051 +0000 UTC m=+1268.301541211" lastFinishedPulling="2026-02-27 19:07:03.509462819 +0000 UTC m=+1322.988243979" observedRunningTime="2026-02-27 19:07:41.794464415 +0000 UTC m=+1361.273245585" watchObservedRunningTime="2026-02-27 19:07:41.801373688 +0000 UTC 
m=+1361.280154848" Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.817701 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-j6c5h" podStartSLOduration=3.216510896 podStartE2EDuration="12.817682182s" podCreationTimestamp="2026-02-27 19:07:29 +0000 UTC" firstStartedPulling="2026-02-27 19:07:30.946394649 +0000 UTC m=+1350.425175809" lastFinishedPulling="2026-02-27 19:07:40.547565935 +0000 UTC m=+1360.026347095" observedRunningTime="2026-02-27 19:07:41.807501338 +0000 UTC m=+1361.286282498" watchObservedRunningTime="2026-02-27 19:07:41.817682182 +0000 UTC m=+1361.296463342" Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.829296 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-flk8g" podStartSLOduration=1.82928032 podStartE2EDuration="1.82928032s" podCreationTimestamp="2026-02-27 19:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:07:41.822245952 +0000 UTC m=+1361.301027112" watchObservedRunningTime="2026-02-27 19:07:41.82928032 +0000 UTC m=+1361.308061480" Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.835833 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-zxxsl" podStartSLOduration=2.835821472 podStartE2EDuration="2.835821472s" podCreationTimestamp="2026-02-27 19:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:07:41.834210242 +0000 UTC m=+1361.312991402" watchObservedRunningTime="2026-02-27 19:07:41.835821472 +0000 UTC m=+1361.314602632" Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.889973 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-e72a-account-create-update-p99q6" 
podStartSLOduration=2.889956722 podStartE2EDuration="2.889956722s" podCreationTimestamp="2026-02-27 19:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:07:41.882650647 +0000 UTC m=+1361.361431807" watchObservedRunningTime="2026-02-27 19:07:41.889956722 +0000 UTC m=+1361.368737882" Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.892738 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=41.112930208 podStartE2EDuration="1m34.892729238s" podCreationTimestamp="2026-02-27 19:06:07 +0000 UTC" firstStartedPulling="2026-02-27 19:06:09.665443318 +0000 UTC m=+1269.144224478" lastFinishedPulling="2026-02-27 19:07:03.445242348 +0000 UTC m=+1322.924023508" observedRunningTime="2026-02-27 19:07:41.869839811 +0000 UTC m=+1361.348620971" watchObservedRunningTime="2026-02-27 19:07:41.892729238 +0000 UTC m=+1361.371510398" Feb 27 19:07:41 crc kubenswrapper[4981]: I0227 19:07:41.906929 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.51810052 podStartE2EDuration="1m28.906908436s" podCreationTimestamp="2026-02-27 19:06:13 +0000 UTC" firstStartedPulling="2026-02-27 19:06:14.200447699 +0000 UTC m=+1273.679228859" lastFinishedPulling="2026-02-27 19:07:40.589255615 +0000 UTC m=+1360.068036775" observedRunningTime="2026-02-27 19:07:41.902929614 +0000 UTC m=+1361.381710774" watchObservedRunningTime="2026-02-27 19:07:41.906908436 +0000 UTC m=+1361.385689596" Feb 27 19:07:42 crc kubenswrapper[4981]: I0227 19:07:42.174638 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-etc-swift\") pod \"swift-storage-0\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " pod="openstack/swift-storage-0" Feb 27 19:07:42 
crc kubenswrapper[4981]: E0227 19:07:42.174809 4981 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 19:07:42 crc kubenswrapper[4981]: E0227 19:07:42.174826 4981 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 19:07:42 crc kubenswrapper[4981]: E0227 19:07:42.174875 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-etc-swift podName:c9c5bb1a-80fb-459f-acb9-e3751c60f684 nodeName:}" failed. No retries permitted until 2026-02-27 19:07:58.174862137 +0000 UTC m=+1377.653643297 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-etc-swift") pod "swift-storage-0" (UID: "c9c5bb1a-80fb-459f-acb9-e3751c60f684") : configmap "swift-ring-files" not found Feb 27 19:07:42 crc kubenswrapper[4981]: I0227 19:07:42.813151 4981 generic.go:334] "Generic (PLEG): container finished" podID="40cbb61c-1aa8-4477-8579-76699afce28b" containerID="e495f30379ea5eec7fcfc406ec77e013036179cba6329049e46a388769876ad2" exitCode=0 Feb 27 19:07:42 crc kubenswrapper[4981]: I0227 19:07:42.814677 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-zxxsl" event={"ID":"40cbb61c-1aa8-4477-8579-76699afce28b","Type":"ContainerDied","Data":"e495f30379ea5eec7fcfc406ec77e013036179cba6329049e46a388769876ad2"} Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.180220 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-n5d2t" podUID="214d65cb-9030-4093-853c-c1485fc1a30a" containerName="ovn-controller" probeResult="failure" output=< Feb 27 19:07:43 crc kubenswrapper[4981]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 27 19:07:43 crc kubenswrapper[4981]: > 
Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.208984 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.425584 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-n5d2t-config-s2l64"] Feb 27 19:07:43 crc kubenswrapper[4981]: E0227 19:07:43.425979 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae1f9f4-459b-4894-ba4c-db79218e7fb0" containerName="mariadb-database-create" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.426001 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae1f9f4-459b-4894-ba4c-db79218e7fb0" containerName="mariadb-database-create" Feb 27 19:07:43 crc kubenswrapper[4981]: E0227 19:07:43.426022 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5088f9d-7a73-4e90-bb3e-b66bc16b840f" containerName="mariadb-database-create" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.426030 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5088f9d-7a73-4e90-bb3e-b66bc16b840f" containerName="mariadb-database-create" Feb 27 19:07:43 crc kubenswrapper[4981]: E0227 19:07:43.426068 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3edce3d3-96e5-4fbe-8ef7-ba2d01d06025" containerName="mariadb-account-create-update" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.426080 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="3edce3d3-96e5-4fbe-8ef7-ba2d01d06025" containerName="mariadb-account-create-update" Feb 27 19:07:43 crc kubenswrapper[4981]: E0227 19:07:43.426089 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dec7952-ffcd-45f1-b788-669b9a76f577" containerName="mariadb-account-create-update" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.426097 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dec7952-ffcd-45f1-b788-669b9a76f577" 
containerName="mariadb-account-create-update" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.426324 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="3edce3d3-96e5-4fbe-8ef7-ba2d01d06025" containerName="mariadb-account-create-update" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.426342 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dec7952-ffcd-45f1-b788-669b9a76f577" containerName="mariadb-account-create-update" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.426357 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5088f9d-7a73-4e90-bb3e-b66bc16b840f" containerName="mariadb-database-create" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.426373 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="fae1f9f4-459b-4894-ba4c-db79218e7fb0" containerName="mariadb-database-create" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.426984 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-n5d2t-config-s2l64" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.433381 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.446750 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n5d2t-config-s2l64"] Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.511253 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-var-log-ovn\") pod \"ovn-controller-n5d2t-config-s2l64\" (UID: \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\") " pod="openstack/ovn-controller-n5d2t-config-s2l64" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.511355 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-var-run-ovn\") pod \"ovn-controller-n5d2t-config-s2l64\" (UID: \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\") " pod="openstack/ovn-controller-n5d2t-config-s2l64" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.511474 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-scripts\") pod \"ovn-controller-n5d2t-config-s2l64\" (UID: \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\") " pod="openstack/ovn-controller-n5d2t-config-s2l64" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.511520 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-var-run\") pod \"ovn-controller-n5d2t-config-s2l64\" (UID: \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\") 
" pod="openstack/ovn-controller-n5d2t-config-s2l64" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.511563 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-additional-scripts\") pod \"ovn-controller-n5d2t-config-s2l64\" (UID: \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\") " pod="openstack/ovn-controller-n5d2t-config-s2l64" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.511584 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbkqj\" (UniqueName: \"kubernetes.io/projected/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-kube-api-access-tbkqj\") pod \"ovn-controller-n5d2t-config-s2l64\" (UID: \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\") " pod="openstack/ovn-controller-n5d2t-config-s2l64" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.613326 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-var-log-ovn\") pod \"ovn-controller-n5d2t-config-s2l64\" (UID: \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\") " pod="openstack/ovn-controller-n5d2t-config-s2l64" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.613380 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-var-run-ovn\") pod \"ovn-controller-n5d2t-config-s2l64\" (UID: \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\") " pod="openstack/ovn-controller-n5d2t-config-s2l64" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.613463 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-scripts\") pod \"ovn-controller-n5d2t-config-s2l64\" (UID: 
\"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\") " pod="openstack/ovn-controller-n5d2t-config-s2l64" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.613499 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-var-run\") pod \"ovn-controller-n5d2t-config-s2l64\" (UID: \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\") " pod="openstack/ovn-controller-n5d2t-config-s2l64" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.613527 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-additional-scripts\") pod \"ovn-controller-n5d2t-config-s2l64\" (UID: \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\") " pod="openstack/ovn-controller-n5d2t-config-s2l64" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.613545 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbkqj\" (UniqueName: \"kubernetes.io/projected/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-kube-api-access-tbkqj\") pod \"ovn-controller-n5d2t-config-s2l64\" (UID: \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\") " pod="openstack/ovn-controller-n5d2t-config-s2l64" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.613716 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-var-run\") pod \"ovn-controller-n5d2t-config-s2l64\" (UID: \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\") " pod="openstack/ovn-controller-n5d2t-config-s2l64" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.613695 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-var-run-ovn\") pod \"ovn-controller-n5d2t-config-s2l64\" (UID: 
\"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\") " pod="openstack/ovn-controller-n5d2t-config-s2l64" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.613829 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-var-log-ovn\") pod \"ovn-controller-n5d2t-config-s2l64\" (UID: \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\") " pod="openstack/ovn-controller-n5d2t-config-s2l64" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.614553 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-additional-scripts\") pod \"ovn-controller-n5d2t-config-s2l64\" (UID: \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\") " pod="openstack/ovn-controller-n5d2t-config-s2l64" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.615518 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-scripts\") pod \"ovn-controller-n5d2t-config-s2l64\" (UID: \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\") " pod="openstack/ovn-controller-n5d2t-config-s2l64" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.635089 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbkqj\" (UniqueName: \"kubernetes.io/projected/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-kube-api-access-tbkqj\") pod \"ovn-controller-n5d2t-config-s2l64\" (UID: \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\") " pod="openstack/ovn-controller-n5d2t-config-s2l64" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.748406 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-n5d2t-config-s2l64" Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.854873 4981 generic.go:334] "Generic (PLEG): container finished" podID="619b6b33-ff2d-4b2c-984e-f1c65bfcdffd" containerID="1d4173a980b74b2b82881942d9c0606a560f6be487e857f7c7eadf0f82dd0572" exitCode=0 Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.854942 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-flk8g" event={"ID":"619b6b33-ff2d-4b2c-984e-f1c65bfcdffd","Type":"ContainerDied","Data":"1d4173a980b74b2b82881942d9c0606a560f6be487e857f7c7eadf0f82dd0572"} Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.858542 4981 generic.go:334] "Generic (PLEG): container finished" podID="f541ace2-11c0-4232-b34b-d7079bfc597b" containerID="26e39f1f6bf137b20eaa6d22829be4fae92fac625401e0025afb0e9f99e16dea" exitCode=0 Feb 27 19:07:43 crc kubenswrapper[4981]: I0227 19:07:43.858643 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e72a-account-create-update-p99q6" event={"ID":"f541ace2-11c0-4232-b34b-d7079bfc597b","Type":"ContainerDied","Data":"26e39f1f6bf137b20eaa6d22829be4fae92fac625401e0025afb0e9f99e16dea"} Feb 27 19:07:44 crc kubenswrapper[4981]: I0227 19:07:44.301680 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-n5d2t-config-s2l64"] Feb 27 19:07:44 crc kubenswrapper[4981]: I0227 19:07:44.305788 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-zxxsl" Feb 27 19:07:44 crc kubenswrapper[4981]: I0227 19:07:44.430270 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkb7b\" (UniqueName: \"kubernetes.io/projected/40cbb61c-1aa8-4477-8579-76699afce28b-kube-api-access-lkb7b\") pod \"40cbb61c-1aa8-4477-8579-76699afce28b\" (UID: \"40cbb61c-1aa8-4477-8579-76699afce28b\") " Feb 27 19:07:44 crc kubenswrapper[4981]: I0227 19:07:44.430373 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40cbb61c-1aa8-4477-8579-76699afce28b-operator-scripts\") pod \"40cbb61c-1aa8-4477-8579-76699afce28b\" (UID: \"40cbb61c-1aa8-4477-8579-76699afce28b\") " Feb 27 19:07:44 crc kubenswrapper[4981]: I0227 19:07:44.431845 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40cbb61c-1aa8-4477-8579-76699afce28b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "40cbb61c-1aa8-4477-8579-76699afce28b" (UID: "40cbb61c-1aa8-4477-8579-76699afce28b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:07:44 crc kubenswrapper[4981]: I0227 19:07:44.532258 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40cbb61c-1aa8-4477-8579-76699afce28b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:44 crc kubenswrapper[4981]: I0227 19:07:44.562766 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-cff9l" Feb 27 19:07:44 crc kubenswrapper[4981]: I0227 19:07:44.612676 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kh2tb"] Feb 27 19:07:44 crc kubenswrapper[4981]: I0227 19:07:44.613082 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" podUID="bec5bfe1-9b74-494e-92e6-6482c06995b9" containerName="dnsmasq-dns" containerID="cri-o://3628d94c5816c499aa00ef37464a0e2d10ee1c7be9bdc0223c85be0c216783e0" gracePeriod=10 Feb 27 19:07:44 crc kubenswrapper[4981]: I0227 19:07:44.867912 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n5d2t-config-s2l64" event={"ID":"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8","Type":"ContainerStarted","Data":"98adcdde79847cc9e5a84f7006f02af4a9d9be542d43fbb70281cbb18ccc43ea"} Feb 27 19:07:44 crc kubenswrapper[4981]: I0227 19:07:44.869692 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-zxxsl" event={"ID":"40cbb61c-1aa8-4477-8579-76699afce28b","Type":"ContainerDied","Data":"352cc56b9c719f88a23a300ba049cc9d6b5a2fc8c1b42e34708ad459a40e4232"} Feb 27 19:07:44 crc kubenswrapper[4981]: I0227 19:07:44.869715 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="352cc56b9c719f88a23a300ba049cc9d6b5a2fc8c1b42e34708ad459a40e4232" Feb 27 19:07:44 crc kubenswrapper[4981]: I0227 19:07:44.869762 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-zxxsl" Feb 27 19:07:44 crc kubenswrapper[4981]: I0227 19:07:44.872522 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d923459f-90f4-4399-80a0-4e22daa1eadf","Type":"ContainerStarted","Data":"3b617bffdd5cc1d450fde9acb69cf5146fe9369c179986b8ac76e6fe9affd265"} Feb 27 19:07:44 crc kubenswrapper[4981]: I0227 19:07:44.872543 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d923459f-90f4-4399-80a0-4e22daa1eadf","Type":"ContainerStarted","Data":"63d0d07ec18342868dff13620687889bed168fed03e7ed3e8bab9795de7f6b30"} Feb 27 19:07:44 crc kubenswrapper[4981]: I0227 19:07:44.872651 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 27 19:07:45 crc kubenswrapper[4981]: I0227 19:07:45.140875 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40cbb61c-1aa8-4477-8579-76699afce28b-kube-api-access-lkb7b" (OuterVolumeSpecName: "kube-api-access-lkb7b") pod "40cbb61c-1aa8-4477-8579-76699afce28b" (UID: "40cbb61c-1aa8-4477-8579-76699afce28b"). InnerVolumeSpecName "kube-api-access-lkb7b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:07:45 crc kubenswrapper[4981]: I0227 19:07:45.143429 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkb7b\" (UniqueName: \"kubernetes.io/projected/40cbb61c-1aa8-4477-8579-76699afce28b-kube-api-access-lkb7b\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:45 crc kubenswrapper[4981]: I0227 19:07:45.342521 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=7.574135924 podStartE2EDuration="9.342498061s" podCreationTimestamp="2026-02-27 19:07:36 +0000 UTC" firstStartedPulling="2026-02-27 19:07:41.149426544 +0000 UTC m=+1360.628207704" lastFinishedPulling="2026-02-27 19:07:42.917788661 +0000 UTC m=+1362.396569841" observedRunningTime="2026-02-27 19:07:44.899145084 +0000 UTC m=+1364.377926244" watchObservedRunningTime="2026-02-27 19:07:45.342498061 +0000 UTC m=+1364.821279241" Feb 27 19:07:45 crc kubenswrapper[4981]: I0227 19:07:45.486822 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-flk8g" Feb 27 19:07:45 crc kubenswrapper[4981]: I0227 19:07:45.651010 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg92k\" (UniqueName: \"kubernetes.io/projected/619b6b33-ff2d-4b2c-984e-f1c65bfcdffd-kube-api-access-zg92k\") pod \"619b6b33-ff2d-4b2c-984e-f1c65bfcdffd\" (UID: \"619b6b33-ff2d-4b2c-984e-f1c65bfcdffd\") " Feb 27 19:07:45 crc kubenswrapper[4981]: I0227 19:07:45.651781 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/619b6b33-ff2d-4b2c-984e-f1c65bfcdffd-operator-scripts\") pod \"619b6b33-ff2d-4b2c-984e-f1c65bfcdffd\" (UID: \"619b6b33-ff2d-4b2c-984e-f1c65bfcdffd\") " Feb 27 19:07:45 crc kubenswrapper[4981]: I0227 19:07:45.652721 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/619b6b33-ff2d-4b2c-984e-f1c65bfcdffd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "619b6b33-ff2d-4b2c-984e-f1c65bfcdffd" (UID: "619b6b33-ff2d-4b2c-984e-f1c65bfcdffd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:07:45 crc kubenswrapper[4981]: I0227 19:07:45.652943 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/619b6b33-ff2d-4b2c-984e-f1c65bfcdffd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:45 crc kubenswrapper[4981]: I0227 19:07:45.660225 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/619b6b33-ff2d-4b2c-984e-f1c65bfcdffd-kube-api-access-zg92k" (OuterVolumeSpecName: "kube-api-access-zg92k") pod "619b6b33-ff2d-4b2c-984e-f1c65bfcdffd" (UID: "619b6b33-ff2d-4b2c-984e-f1c65bfcdffd"). InnerVolumeSpecName "kube-api-access-zg92k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:07:45 crc kubenswrapper[4981]: I0227 19:07:45.719567 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e72a-account-create-update-p99q6" Feb 27 19:07:45 crc kubenswrapper[4981]: I0227 19:07:45.754524 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg92k\" (UniqueName: \"kubernetes.io/projected/619b6b33-ff2d-4b2c-984e-f1c65bfcdffd-kube-api-access-zg92k\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:45 crc kubenswrapper[4981]: I0227 19:07:45.855999 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bxd5\" (UniqueName: \"kubernetes.io/projected/f541ace2-11c0-4232-b34b-d7079bfc597b-kube-api-access-5bxd5\") pod \"f541ace2-11c0-4232-b34b-d7079bfc597b\" (UID: \"f541ace2-11c0-4232-b34b-d7079bfc597b\") " Feb 27 19:07:45 crc kubenswrapper[4981]: I0227 19:07:45.856090 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f541ace2-11c0-4232-b34b-d7079bfc597b-operator-scripts\") pod \"f541ace2-11c0-4232-b34b-d7079bfc597b\" (UID: \"f541ace2-11c0-4232-b34b-d7079bfc597b\") " Feb 27 19:07:45 crc kubenswrapper[4981]: I0227 19:07:45.856684 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f541ace2-11c0-4232-b34b-d7079bfc597b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f541ace2-11c0-4232-b34b-d7079bfc597b" (UID: "f541ace2-11c0-4232-b34b-d7079bfc597b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:07:45 crc kubenswrapper[4981]: I0227 19:07:45.859105 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f541ace2-11c0-4232-b34b-d7079bfc597b-kube-api-access-5bxd5" (OuterVolumeSpecName: "kube-api-access-5bxd5") pod "f541ace2-11c0-4232-b34b-d7079bfc597b" (UID: "f541ace2-11c0-4232-b34b-d7079bfc597b"). InnerVolumeSpecName "kube-api-access-5bxd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:07:45 crc kubenswrapper[4981]: I0227 19:07:45.879908 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-flk8g" event={"ID":"619b6b33-ff2d-4b2c-984e-f1c65bfcdffd","Type":"ContainerDied","Data":"9cc6d706d4068116c6be949061ccb4f6daafe360931b7106052097862ce54732"} Feb 27 19:07:45 crc kubenswrapper[4981]: I0227 19:07:45.879972 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cc6d706d4068116c6be949061ccb4f6daafe360931b7106052097862ce54732" Feb 27 19:07:45 crc kubenswrapper[4981]: I0227 19:07:45.879928 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-flk8g" Feb 27 19:07:45 crc kubenswrapper[4981]: I0227 19:07:45.881610 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e72a-account-create-update-p99q6" event={"ID":"f541ace2-11c0-4232-b34b-d7079bfc597b","Type":"ContainerDied","Data":"aeee5d7c801ce0dcebdc8e91567d431a4fa17276e7fbf38a67ee02170ccfce22"} Feb 27 19:07:45 crc kubenswrapper[4981]: I0227 19:07:45.881800 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e72a-account-create-update-p99q6" Feb 27 19:07:45 crc kubenswrapper[4981]: I0227 19:07:45.881810 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aeee5d7c801ce0dcebdc8e91567d431a4fa17276e7fbf38a67ee02170ccfce22" Feb 27 19:07:45 crc kubenswrapper[4981]: I0227 19:07:45.885733 4981 generic.go:334] "Generic (PLEG): container finished" podID="bec5bfe1-9b74-494e-92e6-6482c06995b9" containerID="3628d94c5816c499aa00ef37464a0e2d10ee1c7be9bdc0223c85be0c216783e0" exitCode=0 Feb 27 19:07:45 crc kubenswrapper[4981]: I0227 19:07:45.885813 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" event={"ID":"bec5bfe1-9b74-494e-92e6-6482c06995b9","Type":"ContainerDied","Data":"3628d94c5816c499aa00ef37464a0e2d10ee1c7be9bdc0223c85be0c216783e0"} Feb 27 19:07:45 crc kubenswrapper[4981]: I0227 19:07:45.961898 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bxd5\" (UniqueName: \"kubernetes.io/projected/f541ace2-11c0-4232-b34b-d7079bfc597b-kube-api-access-5bxd5\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:45 crc kubenswrapper[4981]: I0227 19:07:45.961969 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f541ace2-11c0-4232-b34b-d7079bfc597b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:48 crc kubenswrapper[4981]: I0227 19:07:48.165631 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-n5d2t" podUID="214d65cb-9030-4093-853c-c1485fc1a30a" containerName="ovn-controller" probeResult="failure" output=< Feb 27 19:07:48 crc kubenswrapper[4981]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 27 19:07:48 crc kubenswrapper[4981]: > Feb 27 19:07:48 crc kubenswrapper[4981]: I0227 19:07:48.205897 4981 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" podUID="bec5bfe1-9b74-494e-92e6-6482c06995b9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: connect: connection refused" Feb 27 19:07:49 crc kubenswrapper[4981]: I0227 19:07:49.922279 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n5d2t-config-s2l64" event={"ID":"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8","Type":"ContainerStarted","Data":"461af97dfefde8d5e1d889fe3e1633599ffdbd87f9acf06cf35be9d51671319d"} Feb 27 19:07:49 crc kubenswrapper[4981]: I0227 19:07:49.991561 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-n5d2t-config-s2l64" podStartSLOduration=6.991536804 podStartE2EDuration="6.991536804s" podCreationTimestamp="2026-02-27 19:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:07:49.943322906 +0000 UTC m=+1369.422104076" watchObservedRunningTime="2026-02-27 19:07:49.991536804 +0000 UTC m=+1369.470317964" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.134291 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-tgsfm"] Feb 27 19:07:50 crc kubenswrapper[4981]: E0227 19:07:50.134936 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40cbb61c-1aa8-4477-8579-76699afce28b" containerName="mariadb-database-create" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.134955 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="40cbb61c-1aa8-4477-8579-76699afce28b" containerName="mariadb-database-create" Feb 27 19:07:50 crc kubenswrapper[4981]: E0227 19:07:50.134973 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f541ace2-11c0-4232-b34b-d7079bfc597b" containerName="mariadb-account-create-update" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.134981 4981 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f541ace2-11c0-4232-b34b-d7079bfc597b" containerName="mariadb-account-create-update" Feb 27 19:07:50 crc kubenswrapper[4981]: E0227 19:07:50.134996 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619b6b33-ff2d-4b2c-984e-f1c65bfcdffd" containerName="mariadb-account-create-update" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.135004 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="619b6b33-ff2d-4b2c-984e-f1c65bfcdffd" containerName="mariadb-account-create-update" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.135214 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f541ace2-11c0-4232-b34b-d7079bfc597b" containerName="mariadb-account-create-update" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.135237 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="40cbb61c-1aa8-4477-8579-76699afce28b" containerName="mariadb-database-create" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.135258 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="619b6b33-ff2d-4b2c-984e-f1c65bfcdffd" containerName="mariadb-account-create-update" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.135865 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tgsfm" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.140084 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.140269 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fm8mn" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.147142 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tgsfm"] Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.206699 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81024b85-8686-478d-b17e-7c599561675b-config-data\") pod \"glance-db-sync-tgsfm\" (UID: \"81024b85-8686-478d-b17e-7c599561675b\") " pod="openstack/glance-db-sync-tgsfm" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.206821 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/81024b85-8686-478d-b17e-7c599561675b-db-sync-config-data\") pod \"glance-db-sync-tgsfm\" (UID: \"81024b85-8686-478d-b17e-7c599561675b\") " pod="openstack/glance-db-sync-tgsfm" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.206868 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfkv6\" (UniqueName: \"kubernetes.io/projected/81024b85-8686-478d-b17e-7c599561675b-kube-api-access-cfkv6\") pod \"glance-db-sync-tgsfm\" (UID: \"81024b85-8686-478d-b17e-7c599561675b\") " pod="openstack/glance-db-sync-tgsfm" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.206887 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/81024b85-8686-478d-b17e-7c599561675b-combined-ca-bundle\") pod \"glance-db-sync-tgsfm\" (UID: \"81024b85-8686-478d-b17e-7c599561675b\") " pod="openstack/glance-db-sync-tgsfm" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.253656 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.253699 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.270266 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.309350 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bec5bfe1-9b74-494e-92e6-6482c06995b9-dns-svc\") pod \"bec5bfe1-9b74-494e-92e6-6482c06995b9\" (UID: \"bec5bfe1-9b74-494e-92e6-6482c06995b9\") " Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.309417 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bec5bfe1-9b74-494e-92e6-6482c06995b9-config\") pod \"bec5bfe1-9b74-494e-92e6-6482c06995b9\" (UID: \"bec5bfe1-9b74-494e-92e6-6482c06995b9\") " Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.309450 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bec5bfe1-9b74-494e-92e6-6482c06995b9-ovsdbserver-nb\") pod \"bec5bfe1-9b74-494e-92e6-6482c06995b9\" (UID: \"bec5bfe1-9b74-494e-92e6-6482c06995b9\") " Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.309481 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klkb8\" (UniqueName: \"kubernetes.io/projected/bec5bfe1-9b74-494e-92e6-6482c06995b9-kube-api-access-klkb8\") pod \"bec5bfe1-9b74-494e-92e6-6482c06995b9\" (UID: \"bec5bfe1-9b74-494e-92e6-6482c06995b9\") " Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.310453 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/81024b85-8686-478d-b17e-7c599561675b-db-sync-config-data\") pod \"glance-db-sync-tgsfm\" (UID: \"81024b85-8686-478d-b17e-7c599561675b\") " pod="openstack/glance-db-sync-tgsfm" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.310490 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cfkv6\" (UniqueName: \"kubernetes.io/projected/81024b85-8686-478d-b17e-7c599561675b-kube-api-access-cfkv6\") pod \"glance-db-sync-tgsfm\" (UID: \"81024b85-8686-478d-b17e-7c599561675b\") " pod="openstack/glance-db-sync-tgsfm" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.310506 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81024b85-8686-478d-b17e-7c599561675b-combined-ca-bundle\") pod \"glance-db-sync-tgsfm\" (UID: \"81024b85-8686-478d-b17e-7c599561675b\") " pod="openstack/glance-db-sync-tgsfm" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.310572 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81024b85-8686-478d-b17e-7c599561675b-config-data\") pod \"glance-db-sync-tgsfm\" (UID: \"81024b85-8686-478d-b17e-7c599561675b\") " pod="openstack/glance-db-sync-tgsfm" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.315073 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/81024b85-8686-478d-b17e-7c599561675b-db-sync-config-data\") pod \"glance-db-sync-tgsfm\" (UID: \"81024b85-8686-478d-b17e-7c599561675b\") " pod="openstack/glance-db-sync-tgsfm" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.315908 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bec5bfe1-9b74-494e-92e6-6482c06995b9-kube-api-access-klkb8" (OuterVolumeSpecName: "kube-api-access-klkb8") pod "bec5bfe1-9b74-494e-92e6-6482c06995b9" (UID: "bec5bfe1-9b74-494e-92e6-6482c06995b9"). InnerVolumeSpecName "kube-api-access-klkb8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.318517 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81024b85-8686-478d-b17e-7c599561675b-combined-ca-bundle\") pod \"glance-db-sync-tgsfm\" (UID: \"81024b85-8686-478d-b17e-7c599561675b\") " pod="openstack/glance-db-sync-tgsfm" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.322036 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81024b85-8686-478d-b17e-7c599561675b-config-data\") pod \"glance-db-sync-tgsfm\" (UID: \"81024b85-8686-478d-b17e-7c599561675b\") " pod="openstack/glance-db-sync-tgsfm" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.346807 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfkv6\" (UniqueName: \"kubernetes.io/projected/81024b85-8686-478d-b17e-7c599561675b-kube-api-access-cfkv6\") pod \"glance-db-sync-tgsfm\" (UID: \"81024b85-8686-478d-b17e-7c599561675b\") " pod="openstack/glance-db-sync-tgsfm" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.373930 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bec5bfe1-9b74-494e-92e6-6482c06995b9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bec5bfe1-9b74-494e-92e6-6482c06995b9" (UID: "bec5bfe1-9b74-494e-92e6-6482c06995b9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.385623 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bec5bfe1-9b74-494e-92e6-6482c06995b9-config" (OuterVolumeSpecName: "config") pod "bec5bfe1-9b74-494e-92e6-6482c06995b9" (UID: "bec5bfe1-9b74-494e-92e6-6482c06995b9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.386247 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bec5bfe1-9b74-494e-92e6-6482c06995b9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bec5bfe1-9b74-494e-92e6-6482c06995b9" (UID: "bec5bfe1-9b74-494e-92e6-6482c06995b9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.411697 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bec5bfe1-9b74-494e-92e6-6482c06995b9-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.411727 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bec5bfe1-9b74-494e-92e6-6482c06995b9-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.411738 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bec5bfe1-9b74-494e-92e6-6482c06995b9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.411749 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klkb8\" (UniqueName: \"kubernetes.io/projected/bec5bfe1-9b74-494e-92e6-6482c06995b9-kube-api-access-klkb8\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.570621 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tgsfm" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.931602 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" event={"ID":"bec5bfe1-9b74-494e-92e6-6482c06995b9","Type":"ContainerDied","Data":"718deef57c61884e17af1d31235adceab41647fafef506096ca012123183087e"} Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.931650 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-kh2tb" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.931930 4981 scope.go:117] "RemoveContainer" containerID="3628d94c5816c499aa00ef37464a0e2d10ee1c7be9bdc0223c85be0c216783e0" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.933587 4981 generic.go:334] "Generic (PLEG): container finished" podID="4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8" containerID="461af97dfefde8d5e1d889fe3e1633599ffdbd87f9acf06cf35be9d51671319d" exitCode=0 Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.933649 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n5d2t-config-s2l64" event={"ID":"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8","Type":"ContainerDied","Data":"461af97dfefde8d5e1d889fe3e1633599ffdbd87f9acf06cf35be9d51671319d"} Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.957574 4981 scope.go:117] "RemoveContainer" containerID="14230eb3667a747b265f98dd4b8981a8801be4fef7075b9f388f5993e58f4075" Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.986590 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kh2tb"] Feb 27 19:07:50 crc kubenswrapper[4981]: I0227 19:07:50.997886 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-kh2tb"] Feb 27 19:07:51 crc kubenswrapper[4981]: I0227 19:07:51.126230 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tgsfm"] Feb 27 19:07:53 crc 
kubenswrapper[4981]: I0227 19:07:53.497345 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bec5bfe1-9b74-494e-92e6-6482c06995b9" path="/var/lib/kubelet/pods/bec5bfe1-9b74-494e-92e6-6482c06995b9/volumes" Feb 27 19:07:53 crc kubenswrapper[4981]: I0227 19:07:53.516033 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tgsfm" event={"ID":"81024b85-8686-478d-b17e-7c599561675b","Type":"ContainerStarted","Data":"c6f5609ce7040998b69d84a08d753866c1be7d2d4264488ec4816b541b55dbb1"} Feb 27 19:07:53 crc kubenswrapper[4981]: I0227 19:07:53.541738 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-n5d2t" Feb 27 19:07:53 crc kubenswrapper[4981]: I0227 19:07:53.888792 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n5d2t-config-s2l64" Feb 27 19:07:53 crc kubenswrapper[4981]: I0227 19:07:53.959731 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 27 19:07:54 crc kubenswrapper[4981]: I0227 19:07:54.014614 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbkqj\" (UniqueName: \"kubernetes.io/projected/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-kube-api-access-tbkqj\") pod \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\" (UID: \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\") " Feb 27 19:07:54 crc kubenswrapper[4981]: I0227 19:07:54.014847 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-var-run-ovn\") pod \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\" (UID: \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\") " Feb 27 19:07:54 crc kubenswrapper[4981]: I0227 19:07:54.014951 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8" (UID: "4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:07:54 crc kubenswrapper[4981]: I0227 19:07:54.015107 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-additional-scripts\") pod \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\" (UID: \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\") " Feb 27 19:07:54 crc kubenswrapper[4981]: I0227 19:07:54.015193 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-var-log-ovn\") pod \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\" (UID: \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\") " Feb 27 19:07:54 crc kubenswrapper[4981]: I0227 19:07:54.015384 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-scripts\") pod \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\" (UID: \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\") " Feb 27 19:07:54 crc kubenswrapper[4981]: I0227 19:07:54.015475 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-var-run\") pod \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\" (UID: \"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8\") " Feb 27 19:07:54 crc kubenswrapper[4981]: I0227 19:07:54.015392 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8" (UID: 
"4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:07:54 crc kubenswrapper[4981]: I0227 19:07:54.015562 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-var-run" (OuterVolumeSpecName: "var-run") pod "4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8" (UID: "4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:07:54 crc kubenswrapper[4981]: I0227 19:07:54.016072 4981 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-var-run\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:54 crc kubenswrapper[4981]: I0227 19:07:54.016146 4981 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:54 crc kubenswrapper[4981]: I0227 19:07:54.016202 4981 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:54 crc kubenswrapper[4981]: I0227 19:07:54.016274 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-scripts" (OuterVolumeSpecName: "scripts") pod "4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8" (UID: "4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:07:54 crc kubenswrapper[4981]: I0227 19:07:54.016499 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8" (UID: "4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:07:54 crc kubenswrapper[4981]: I0227 19:07:54.022783 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-kube-api-access-tbkqj" (OuterVolumeSpecName: "kube-api-access-tbkqj") pod "4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8" (UID: "4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8"). InnerVolumeSpecName "kube-api-access-tbkqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:07:54 crc kubenswrapper[4981]: I0227 19:07:54.119382 4981 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:54 crc kubenswrapper[4981]: I0227 19:07:54.119462 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:54 crc kubenswrapper[4981]: I0227 19:07:54.119478 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbkqj\" (UniqueName: \"kubernetes.io/projected/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8-kube-api-access-tbkqj\") on node \"crc\" DevicePath \"\"" Feb 27 19:07:54 crc kubenswrapper[4981]: I0227 19:07:54.475238 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-n5d2t-config-s2l64"] Feb 27 19:07:54 crc kubenswrapper[4981]: 
I0227 19:07:54.482454 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-n5d2t-config-s2l64"] Feb 27 19:07:54 crc kubenswrapper[4981]: I0227 19:07:54.523928 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98adcdde79847cc9e5a84f7006f02af4a9d9be542d43fbb70281cbb18ccc43ea" Feb 27 19:07:54 crc kubenswrapper[4981]: I0227 19:07:54.524015 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-n5d2t-config-s2l64" Feb 27 19:07:55 crc kubenswrapper[4981]: I0227 19:07:55.637268 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8" path="/var/lib/kubelet/pods/4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8/volumes" Feb 27 19:07:57 crc kubenswrapper[4981]: I0227 19:07:57.496686 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 27 19:07:58 crc kubenswrapper[4981]: I0227 19:07:58.192880 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-etc-swift\") pod \"swift-storage-0\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " pod="openstack/swift-storage-0" Feb 27 19:07:58 crc kubenswrapper[4981]: I0227 19:07:58.209334 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-etc-swift\") pod \"swift-storage-0\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " pod="openstack/swift-storage-0" Feb 27 19:07:58 crc kubenswrapper[4981]: I0227 19:07:58.343193 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f928877c-eaff-4ab4-ae3b-ba6ed721642c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 27 19:07:58 crc 
kubenswrapper[4981]: I0227 19:07:58.462016 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 27 19:08:00 crc kubenswrapper[4981]: I0227 19:08:00.084397 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Feb 27 19:08:00 crc kubenswrapper[4981]: I0227 19:08:00.147893 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536988-bmwsv"] Feb 27 19:08:00 crc kubenswrapper[4981]: E0227 19:08:00.148411 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec5bfe1-9b74-494e-92e6-6482c06995b9" containerName="dnsmasq-dns" Feb 27 19:08:00 crc kubenswrapper[4981]: I0227 19:08:00.148428 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec5bfe1-9b74-494e-92e6-6482c06995b9" containerName="dnsmasq-dns" Feb 27 19:08:00 crc kubenswrapper[4981]: E0227 19:08:00.148443 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec5bfe1-9b74-494e-92e6-6482c06995b9" containerName="init" Feb 27 19:08:00 crc kubenswrapper[4981]: I0227 19:08:00.148451 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec5bfe1-9b74-494e-92e6-6482c06995b9" containerName="init" Feb 27 19:08:00 crc kubenswrapper[4981]: E0227 19:08:00.148476 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8" containerName="ovn-config" Feb 27 19:08:00 crc kubenswrapper[4981]: I0227 19:08:00.148482 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8" containerName="ovn-config" Feb 27 19:08:00 crc kubenswrapper[4981]: I0227 19:08:00.148625 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="bec5bfe1-9b74-494e-92e6-6482c06995b9" containerName="dnsmasq-dns" Feb 27 19:08:00 
crc kubenswrapper[4981]: I0227 19:08:00.148635 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ac1ae7d-238c-4d64-bd97-5d4d6e1758c8" containerName="ovn-config" Feb 27 19:08:00 crc kubenswrapper[4981]: I0227 19:08:00.149235 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536988-bmwsv" Feb 27 19:08:00 crc kubenswrapper[4981]: I0227 19:08:00.151514 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf" Feb 27 19:08:00 crc kubenswrapper[4981]: I0227 19:08:00.153101 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:08:00 crc kubenswrapper[4981]: I0227 19:08:00.153528 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:08:00 crc kubenswrapper[4981]: I0227 19:08:00.261497 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85bbb\" (UniqueName: \"kubernetes.io/projected/5437f2b0-3e2f-434e-a1b2-a152345065a5-kube-api-access-85bbb\") pod \"auto-csr-approver-29536988-bmwsv\" (UID: \"5437f2b0-3e2f-434e-a1b2-a152345065a5\") " pod="openshift-infra/auto-csr-approver-29536988-bmwsv" Feb 27 19:08:00 crc kubenswrapper[4981]: I0227 19:08:00.363682 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85bbb\" (UniqueName: \"kubernetes.io/projected/5437f2b0-3e2f-434e-a1b2-a152345065a5-kube-api-access-85bbb\") pod \"auto-csr-approver-29536988-bmwsv\" (UID: \"5437f2b0-3e2f-434e-a1b2-a152345065a5\") " pod="openshift-infra/auto-csr-approver-29536988-bmwsv" Feb 27 19:08:00 crc kubenswrapper[4981]: I0227 19:08:00.405231 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85bbb\" (UniqueName: 
\"kubernetes.io/projected/5437f2b0-3e2f-434e-a1b2-a152345065a5-kube-api-access-85bbb\") pod \"auto-csr-approver-29536988-bmwsv\" (UID: \"5437f2b0-3e2f-434e-a1b2-a152345065a5\") " pod="openshift-infra/auto-csr-approver-29536988-bmwsv" Feb 27 19:08:00 crc kubenswrapper[4981]: I0227 19:08:00.478906 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536988-bmwsv" Feb 27 19:08:00 crc kubenswrapper[4981]: I0227 19:08:00.673498 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536988-bmwsv"] Feb 27 19:08:00 crc kubenswrapper[4981]: I0227 19:08:00.860137 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 27 19:08:01 crc kubenswrapper[4981]: I0227 19:08:01.091667 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerStarted","Data":"aef3afc70ca34265bb191fc692e1a0d2b393d895b5f2e44bc8c7d999f7ccce8e"} Feb 27 19:08:01 crc kubenswrapper[4981]: I0227 19:08:01.133466 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536988-bmwsv"] Feb 27 19:08:02 crc kubenswrapper[4981]: I0227 19:08:02.146005 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536988-bmwsv" event={"ID":"5437f2b0-3e2f-434e-a1b2-a152345065a5","Type":"ContainerStarted","Data":"634542eef5528f64db0350ec608398f80bb311323df4cb33220b73c9659d4fc6"} Feb 27 19:08:03 crc kubenswrapper[4981]: I0227 19:08:03.163571 4981 generic.go:334] "Generic (PLEG): container finished" podID="fcfd5f62-e6b9-4a63-8030-df81c9d7b580" containerID="52292a3a2c1906d86541be54cde391b3e2ea44195dfca89da85c7b92391c2d63" exitCode=0 Feb 27 19:08:03 crc kubenswrapper[4981]: I0227 19:08:03.163647 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j6c5h" 
event={"ID":"fcfd5f62-e6b9-4a63-8030-df81c9d7b580","Type":"ContainerDied","Data":"52292a3a2c1906d86541be54cde391b3e2ea44195dfca89da85c7b92391c2d63"} Feb 27 19:08:04 crc kubenswrapper[4981]: I0227 19:08:04.618771 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-j6c5h" Feb 27 19:08:04 crc kubenswrapper[4981]: I0227 19:08:04.723580 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-swiftconf\") pod \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " Feb 27 19:08:04 crc kubenswrapper[4981]: I0227 19:08:04.723667 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-scripts\") pod \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " Feb 27 19:08:04 crc kubenswrapper[4981]: I0227 19:08:04.723710 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-ring-data-devices\") pod \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " Feb 27 19:08:04 crc kubenswrapper[4981]: I0227 19:08:04.723783 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-combined-ca-bundle\") pod \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " Feb 27 19:08:04 crc kubenswrapper[4981]: I0227 19:08:04.723850 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-etc-swift\") pod 
\"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " Feb 27 19:08:04 crc kubenswrapper[4981]: I0227 19:08:04.723895 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-dispersionconf\") pod \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " Feb 27 19:08:04 crc kubenswrapper[4981]: I0227 19:08:04.723919 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndqc2\" (UniqueName: \"kubernetes.io/projected/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-kube-api-access-ndqc2\") pod \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\" (UID: \"fcfd5f62-e6b9-4a63-8030-df81c9d7b580\") " Feb 27 19:08:04 crc kubenswrapper[4981]: I0227 19:08:04.750030 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "fcfd5f62-e6b9-4a63-8030-df81c9d7b580" (UID: "fcfd5f62-e6b9-4a63-8030-df81c9d7b580"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:08:04 crc kubenswrapper[4981]: I0227 19:08:04.750439 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "fcfd5f62-e6b9-4a63-8030-df81c9d7b580" (UID: "fcfd5f62-e6b9-4a63-8030-df81c9d7b580"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:08:04 crc kubenswrapper[4981]: I0227 19:08:04.825661 4981 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 27 19:08:04 crc kubenswrapper[4981]: I0227 19:08:04.825690 4981 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 27 19:08:04 crc kubenswrapper[4981]: I0227 19:08:04.886834 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-kube-api-access-ndqc2" (OuterVolumeSpecName: "kube-api-access-ndqc2") pod "fcfd5f62-e6b9-4a63-8030-df81c9d7b580" (UID: "fcfd5f62-e6b9-4a63-8030-df81c9d7b580"). InnerVolumeSpecName "kube-api-access-ndqc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:08:04 crc kubenswrapper[4981]: I0227 19:08:04.888074 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-scripts" (OuterVolumeSpecName: "scripts") pod "fcfd5f62-e6b9-4a63-8030-df81c9d7b580" (UID: "fcfd5f62-e6b9-4a63-8030-df81c9d7b580"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:08:04 crc kubenswrapper[4981]: I0227 19:08:04.890832 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "fcfd5f62-e6b9-4a63-8030-df81c9d7b580" (UID: "fcfd5f62-e6b9-4a63-8030-df81c9d7b580"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:08:04 crc kubenswrapper[4981]: I0227 19:08:04.891069 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fcfd5f62-e6b9-4a63-8030-df81c9d7b580" (UID: "fcfd5f62-e6b9-4a63-8030-df81c9d7b580"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:08:04 crc kubenswrapper[4981]: I0227 19:08:04.909771 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "fcfd5f62-e6b9-4a63-8030-df81c9d7b580" (UID: "fcfd5f62-e6b9-4a63-8030-df81c9d7b580"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:08:04 crc kubenswrapper[4981]: I0227 19:08:04.928480 4981 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 27 19:08:04 crc kubenswrapper[4981]: I0227 19:08:04.928515 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndqc2\" (UniqueName: \"kubernetes.io/projected/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-kube-api-access-ndqc2\") on node \"crc\" DevicePath \"\"" Feb 27 19:08:04 crc kubenswrapper[4981]: I0227 19:08:04.928528 4981 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 27 19:08:04 crc kubenswrapper[4981]: I0227 19:08:04.928537 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:08:04 crc 
kubenswrapper[4981]: I0227 19:08:04.928546 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcfd5f62-e6b9-4a63-8030-df81c9d7b580-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:08:05 crc kubenswrapper[4981]: I0227 19:08:05.225633 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j6c5h" event={"ID":"fcfd5f62-e6b9-4a63-8030-df81c9d7b580","Type":"ContainerDied","Data":"c0d99c1986a219e16631880a870ecb4fbf207ee88710aae7aad2c8d83877dade"} Feb 27 19:08:05 crc kubenswrapper[4981]: I0227 19:08:05.225976 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0d99c1986a219e16631880a870ecb4fbf207ee88710aae7aad2c8d83877dade" Feb 27 19:08:05 crc kubenswrapper[4981]: I0227 19:08:05.226033 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-j6c5h" Feb 27 19:08:08 crc kubenswrapper[4981]: I0227 19:08:08.339670 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f928877c-eaff-4ab4-ae3b-ba6ed721642c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 27 19:08:09 crc kubenswrapper[4981]: I0227 19:08:09.571592 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Feb 27 19:08:18 crc kubenswrapper[4981]: I0227 19:08:18.339744 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f928877c-eaff-4ab4-ae3b-ba6ed721642c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 27 19:08:19 crc kubenswrapper[4981]: I0227 19:08:19.242729 4981 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Feb 27 19:08:20 crc kubenswrapper[4981]: I0227 19:08:20.248806 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:08:20 crc kubenswrapper[4981]: I0227 19:08:20.249103 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:08:20 crc kubenswrapper[4981]: I0227 19:08:20.249153 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 19:08:20 crc kubenswrapper[4981]: I0227 19:08:20.249789 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"03bdafd14e1d7a2332dfab716d224757c23e9832e5c4bc0ebaf94e7e0e277e07"} pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 19:08:20 crc kubenswrapper[4981]: I0227 19:08:20.249841 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" containerID="cri-o://03bdafd14e1d7a2332dfab716d224757c23e9832e5c4bc0ebaf94e7e0e277e07" 
gracePeriod=600 Feb 27 19:08:27 crc kubenswrapper[4981]: I0227 19:08:27.210335 4981 generic.go:334] "Generic (PLEG): container finished" podID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerID="03bdafd14e1d7a2332dfab716d224757c23e9832e5c4bc0ebaf94e7e0e277e07" exitCode=0 Feb 27 19:08:27 crc kubenswrapper[4981]: I0227 19:08:27.210443 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerDied","Data":"03bdafd14e1d7a2332dfab716d224757c23e9832e5c4bc0ebaf94e7e0e277e07"} Feb 27 19:08:27 crc kubenswrapper[4981]: I0227 19:08:27.211160 4981 scope.go:117] "RemoveContainer" containerID="295fa1abf26d7f71e7264b907ce20f7606d63942d5385b64cf4bd1f2c3c45c16" Feb 27 19:08:28 crc kubenswrapper[4981]: I0227 19:08:28.340118 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f928877c-eaff-4ab4-ae3b-ba6ed721642c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 27 19:08:29 crc kubenswrapper[4981]: I0227 19:08:29.243291 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Feb 27 19:08:38 crc kubenswrapper[4981]: I0227 19:08:38.339533 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f928877c-eaff-4ab4-ae3b-ba6ed721642c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 27 19:08:39 crc kubenswrapper[4981]: I0227 19:08:39.243952 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" containerName="rabbitmq" probeResult="failure" output="dial tcp 
10.217.0.107:5671: connect: connection refused" Feb 27 19:08:48 crc kubenswrapper[4981]: I0227 19:08:48.339621 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f928877c-eaff-4ab4-ae3b-ba6ed721642c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 27 19:08:49 crc kubenswrapper[4981]: I0227 19:08:49.260850 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Feb 27 19:08:51 crc kubenswrapper[4981]: I0227 19:08:51.634540 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Feb 27 19:08:53 crc kubenswrapper[4981]: I0227 19:08:53.630064 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f928877c-eaff-4ab4-ae3b-ba6ed721642c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 27 19:08:58 crc kubenswrapper[4981]: I0227 19:08:58.506840 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f928877c-eaff-4ab4-ae3b-ba6ed721642c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 27 19:08:59 crc kubenswrapper[4981]: I0227 19:08:59.245704 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Feb 27 19:09:08 crc kubenswrapper[4981]: I0227 
19:09:08.338719 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f928877c-eaff-4ab4-ae3b-ba6ed721642c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 27 19:09:09 crc kubenswrapper[4981]: I0227 19:09:09.793829 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Feb 27 19:09:18 crc kubenswrapper[4981]: I0227 19:09:18.339041 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f928877c-eaff-4ab4-ae3b-ba6ed721642c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 27 19:09:18 crc kubenswrapper[4981]: E0227 19:09:18.547871 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 27 19:09:18 crc kubenswrapper[4981]: E0227 19:09:18.548436 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cfkv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-tgsfm_openstack(81024b85-8686-478d-b17e-7c599561675b): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Feb 27 19:09:18 crc kubenswrapper[4981]: E0227 19:09:18.549672 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-tgsfm" podUID="81024b85-8686-478d-b17e-7c599561675b" Feb 27 19:09:18 crc kubenswrapper[4981]: E0227 19:09:18.892471 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-tgsfm" podUID="81024b85-8686-478d-b17e-7c599561675b" Feb 27 19:09:19 crc kubenswrapper[4981]: I0227 19:09:19.243036 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Feb 27 19:09:20 crc kubenswrapper[4981]: E0227 19:09:20.183487 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-swift-account:current-podified" Feb 27 19:09:20 crc kubenswrapper[4981]: E0227 19:09:20.184155 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:account-server,Image:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,Command:[/usr/bin/swift-account-server /etc/swift/account-server.conf.d 
-v],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:account,HostPort:0,ContainerPort:6202,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n566h79hfh7h7dh67h57dhc7hd5h594h67dh58h84h59dh59h557h57bh5bchffh85h5h5f6h86h67dh549h55bh6ch9chf7hbbhf8h56dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:swift,ReadOnly:false,MountPath:/srv/node/pv,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-swift,ReadOnly:false,MountPath:/etc/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cache,ReadOnly:false,MountPath:/var/cache/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lock,ReadOnly:false,MountPath:/var/lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hjg8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42445,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
swift-storage-0_openstack(c9c5bb1a-80fb-459f-acb9-e3751c60f684): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:09:20 crc kubenswrapper[4981]: I0227 19:09:20.925149 4981 generic.go:334] "Generic (PLEG): container finished" podID="5437f2b0-3e2f-434e-a1b2-a152345065a5" containerID="ef59041208e1d70019ae21b38567b2c6f87762c9e6b73a7b36517b78f750ad6a" exitCode=0 Feb 27 19:09:20 crc kubenswrapper[4981]: I0227 19:09:20.925308 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536988-bmwsv" event={"ID":"5437f2b0-3e2f-434e-a1b2-a152345065a5","Type":"ContainerDied","Data":"ef59041208e1d70019ae21b38567b2c6f87762c9e6b73a7b36517b78f750ad6a"} Feb 27 19:09:20 crc kubenswrapper[4981]: I0227 19:09:20.933014 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerStarted","Data":"d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba"} Feb 27 19:09:23 crc kubenswrapper[4981]: I0227 19:09:23.962562 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536988-bmwsv" event={"ID":"5437f2b0-3e2f-434e-a1b2-a152345065a5","Type":"ContainerDied","Data":"634542eef5528f64db0350ec608398f80bb311323df4cb33220b73c9659d4fc6"} Feb 27 19:09:23 crc kubenswrapper[4981]: I0227 19:09:23.963218 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="634542eef5528f64db0350ec608398f80bb311323df4cb33220b73c9659d4fc6" Feb 27 19:09:24 crc kubenswrapper[4981]: I0227 19:09:24.000555 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536988-bmwsv" Feb 27 19:09:24 crc kubenswrapper[4981]: I0227 19:09:24.107457 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85bbb\" (UniqueName: \"kubernetes.io/projected/5437f2b0-3e2f-434e-a1b2-a152345065a5-kube-api-access-85bbb\") pod \"5437f2b0-3e2f-434e-a1b2-a152345065a5\" (UID: \"5437f2b0-3e2f-434e-a1b2-a152345065a5\") " Feb 27 19:09:24 crc kubenswrapper[4981]: I0227 19:09:24.113861 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5437f2b0-3e2f-434e-a1b2-a152345065a5-kube-api-access-85bbb" (OuterVolumeSpecName: "kube-api-access-85bbb") pod "5437f2b0-3e2f-434e-a1b2-a152345065a5" (UID: "5437f2b0-3e2f-434e-a1b2-a152345065a5"). InnerVolumeSpecName "kube-api-access-85bbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:09:24 crc kubenswrapper[4981]: I0227 19:09:24.209326 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85bbb\" (UniqueName: \"kubernetes.io/projected/5437f2b0-3e2f-434e-a1b2-a152345065a5-kube-api-access-85bbb\") on node \"crc\" DevicePath \"\"" Feb 27 19:09:24 crc kubenswrapper[4981]: I0227 19:09:24.974212 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536988-bmwsv" Feb 27 19:09:25 crc kubenswrapper[4981]: I0227 19:09:25.084070 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536982-b7sr9"] Feb 27 19:09:25 crc kubenswrapper[4981]: I0227 19:09:25.093848 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536982-b7sr9"] Feb 27 19:09:25 crc kubenswrapper[4981]: I0227 19:09:25.644652 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c55cef26-7bd2-40d6-94b3-d2103eb1def6" path="/var/lib/kubelet/pods/c55cef26-7bd2-40d6-94b3-d2103eb1def6/volumes" Feb 27 19:09:28 crc kubenswrapper[4981]: I0227 19:09:28.005072 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerStarted","Data":"e0eca54f11d429374a0eee69171647db11c1192aa00c288bd9e67f3a6f0c0246"} Feb 27 19:09:28 crc kubenswrapper[4981]: I0227 19:09:28.342563 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:09:29 crc kubenswrapper[4981]: I0227 19:09:29.044391 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerStarted","Data":"bd2133012f7ec8d5b23febc4eae98775150d6779cece3953959dd0ebaafac076"} Feb 27 19:09:29 crc kubenswrapper[4981]: I0227 19:09:29.044779 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerStarted","Data":"9a1a2e131f5761d079c69185c95e394bd577eda00ea0354161ac5ab992f9e3d0"} Feb 27 19:09:29 crc kubenswrapper[4981]: I0227 19:09:29.244307 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.059185 4981 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerStarted","Data":"38569ba465d1fbcc944576c382365600b3972a77a5d42e3a33726b72c23be51a"} Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.353073 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-2srwk"] Feb 27 19:09:30 crc kubenswrapper[4981]: E0227 19:09:30.353382 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcfd5f62-e6b9-4a63-8030-df81c9d7b580" containerName="swift-ring-rebalance" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.353397 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcfd5f62-e6b9-4a63-8030-df81c9d7b580" containerName="swift-ring-rebalance" Feb 27 19:09:30 crc kubenswrapper[4981]: E0227 19:09:30.353417 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5437f2b0-3e2f-434e-a1b2-a152345065a5" containerName="oc" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.353423 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="5437f2b0-3e2f-434e-a1b2-a152345065a5" containerName="oc" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.353583 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="5437f2b0-3e2f-434e-a1b2-a152345065a5" containerName="oc" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.353615 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcfd5f62-e6b9-4a63-8030-df81c9d7b580" containerName="swift-ring-rebalance" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.354126 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-2srwk" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.362308 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-2srwk"] Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.439915 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-spmns"] Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.440889 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-spmns" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.447127 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-cb28-account-create-update-wm6rr"] Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.448007 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cb28-account-create-update-wm6rr" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.449854 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.452007 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-spmns"] Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.460316 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b46d876a-df60-46ef-a33d-6f2ddb4261f6-operator-scripts\") pod \"cinder-db-create-2srwk\" (UID: \"b46d876a-df60-46ef-a33d-6f2ddb4261f6\") " pod="openstack/cinder-db-create-2srwk" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.460400 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfnkl\" (UniqueName: \"kubernetes.io/projected/b46d876a-df60-46ef-a33d-6f2ddb4261f6-kube-api-access-kfnkl\") pod \"cinder-db-create-2srwk\" (UID: 
\"b46d876a-df60-46ef-a33d-6f2ddb4261f6\") " pod="openstack/cinder-db-create-2srwk" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.481764 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cb28-account-create-update-wm6rr"] Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.562273 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dd430a2-0c5e-4acc-9123-6bee2f09aa67-operator-scripts\") pod \"barbican-db-create-spmns\" (UID: \"1dd430a2-0c5e-4acc-9123-6bee2f09aa67\") " pod="openstack/barbican-db-create-spmns" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.562332 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b46d876a-df60-46ef-a33d-6f2ddb4261f6-operator-scripts\") pod \"cinder-db-create-2srwk\" (UID: \"b46d876a-df60-46ef-a33d-6f2ddb4261f6\") " pod="openstack/cinder-db-create-2srwk" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.562423 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfnkl\" (UniqueName: \"kubernetes.io/projected/b46d876a-df60-46ef-a33d-6f2ddb4261f6-kube-api-access-kfnkl\") pod \"cinder-db-create-2srwk\" (UID: \"b46d876a-df60-46ef-a33d-6f2ddb4261f6\") " pod="openstack/cinder-db-create-2srwk" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.562444 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzr8x\" (UniqueName: \"kubernetes.io/projected/1dd430a2-0c5e-4acc-9123-6bee2f09aa67-kube-api-access-nzr8x\") pod \"barbican-db-create-spmns\" (UID: \"1dd430a2-0c5e-4acc-9123-6bee2f09aa67\") " pod="openstack/barbican-db-create-spmns" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.562469 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-dq86n\" (UniqueName: \"kubernetes.io/projected/94eef5c5-d31c-4759-995e-ce36727018f1-kube-api-access-dq86n\") pod \"cinder-cb28-account-create-update-wm6rr\" (UID: \"94eef5c5-d31c-4759-995e-ce36727018f1\") " pod="openstack/cinder-cb28-account-create-update-wm6rr" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.562492 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94eef5c5-d31c-4759-995e-ce36727018f1-operator-scripts\") pod \"cinder-cb28-account-create-update-wm6rr\" (UID: \"94eef5c5-d31c-4759-995e-ce36727018f1\") " pod="openstack/cinder-cb28-account-create-update-wm6rr" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.563203 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b46d876a-df60-46ef-a33d-6f2ddb4261f6-operator-scripts\") pod \"cinder-db-create-2srwk\" (UID: \"b46d876a-df60-46ef-a33d-6f2ddb4261f6\") " pod="openstack/cinder-db-create-2srwk" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.580362 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfnkl\" (UniqueName: \"kubernetes.io/projected/b46d876a-df60-46ef-a33d-6f2ddb4261f6-kube-api-access-kfnkl\") pod \"cinder-db-create-2srwk\" (UID: \"b46d876a-df60-46ef-a33d-6f2ddb4261f6\") " pod="openstack/cinder-db-create-2srwk" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.653930 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-421d-account-create-update-pkhb5"] Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.655659 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-421d-account-create-update-pkhb5" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.657370 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.663871 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzr8x\" (UniqueName: \"kubernetes.io/projected/1dd430a2-0c5e-4acc-9123-6bee2f09aa67-kube-api-access-nzr8x\") pod \"barbican-db-create-spmns\" (UID: \"1dd430a2-0c5e-4acc-9123-6bee2f09aa67\") " pod="openstack/barbican-db-create-spmns" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.663920 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq86n\" (UniqueName: \"kubernetes.io/projected/94eef5c5-d31c-4759-995e-ce36727018f1-kube-api-access-dq86n\") pod \"cinder-cb28-account-create-update-wm6rr\" (UID: \"94eef5c5-d31c-4759-995e-ce36727018f1\") " pod="openstack/cinder-cb28-account-create-update-wm6rr" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.663947 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94eef5c5-d31c-4759-995e-ce36727018f1-operator-scripts\") pod \"cinder-cb28-account-create-update-wm6rr\" (UID: \"94eef5c5-d31c-4759-995e-ce36727018f1\") " pod="openstack/cinder-cb28-account-create-update-wm6rr" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.663997 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dd430a2-0c5e-4acc-9123-6bee2f09aa67-operator-scripts\") pod \"barbican-db-create-spmns\" (UID: \"1dd430a2-0c5e-4acc-9123-6bee2f09aa67\") " pod="openstack/barbican-db-create-spmns" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.664699 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dd430a2-0c5e-4acc-9123-6bee2f09aa67-operator-scripts\") pod \"barbican-db-create-spmns\" (UID: \"1dd430a2-0c5e-4acc-9123-6bee2f09aa67\") " pod="openstack/barbican-db-create-spmns" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.665183 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94eef5c5-d31c-4759-995e-ce36727018f1-operator-scripts\") pod \"cinder-cb28-account-create-update-wm6rr\" (UID: \"94eef5c5-d31c-4759-995e-ce36727018f1\") " pod="openstack/cinder-cb28-account-create-update-wm6rr" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.667791 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-2srwk" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.676447 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-421d-account-create-update-pkhb5"] Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.698794 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq86n\" (UniqueName: \"kubernetes.io/projected/94eef5c5-d31c-4759-995e-ce36727018f1-kube-api-access-dq86n\") pod \"cinder-cb28-account-create-update-wm6rr\" (UID: \"94eef5c5-d31c-4759-995e-ce36727018f1\") " pod="openstack/cinder-cb28-account-create-update-wm6rr" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.704737 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzr8x\" (UniqueName: \"kubernetes.io/projected/1dd430a2-0c5e-4acc-9123-6bee2f09aa67-kube-api-access-nzr8x\") pod \"barbican-db-create-spmns\" (UID: \"1dd430a2-0c5e-4acc-9123-6bee2f09aa67\") " pod="openstack/barbican-db-create-spmns" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.710133 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-crjbt"] Feb 27 19:09:30 crc 
kubenswrapper[4981]: I0227 19:09:30.711391 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-crjbt" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.715128 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.716219 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.716475 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.716668 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4zdz2" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.729364 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-crjbt"] Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.760794 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-spmns" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.764336 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-cb28-account-create-update-wm6rr" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.765646 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c44ff793-41da-4b74-b057-f4b3596eeb9d-operator-scripts\") pod \"barbican-421d-account-create-update-pkhb5\" (UID: \"c44ff793-41da-4b74-b057-f4b3596eeb9d\") " pod="openstack/barbican-421d-account-create-update-pkhb5" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.765745 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmx86\" (UniqueName: \"kubernetes.io/projected/c44ff793-41da-4b74-b057-f4b3596eeb9d-kube-api-access-wmx86\") pod \"barbican-421d-account-create-update-pkhb5\" (UID: \"c44ff793-41da-4b74-b057-f4b3596eeb9d\") " pod="openstack/barbican-421d-account-create-update-pkhb5" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.782134 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-m4gk2"] Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.783376 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-m4gk2" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.801200 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-m4gk2"] Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.816793 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-9ea4-account-create-update-ggv5q"] Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.818276 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9ea4-account-create-update-ggv5q" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.821661 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.826127 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9ea4-account-create-update-ggv5q"] Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.866734 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c44ff793-41da-4b74-b057-f4b3596eeb9d-operator-scripts\") pod \"barbican-421d-account-create-update-pkhb5\" (UID: \"c44ff793-41da-4b74-b057-f4b3596eeb9d\") " pod="openstack/barbican-421d-account-create-update-pkhb5" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.866777 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dz49\" (UniqueName: \"kubernetes.io/projected/c7ebc81e-dae3-428f-9401-ddead1a42cec-kube-api-access-6dz49\") pod \"keystone-db-sync-crjbt\" (UID: \"c7ebc81e-dae3-428f-9401-ddead1a42cec\") " pod="openstack/keystone-db-sync-crjbt" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.866842 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67ba26f0-21ac-43b2-a954-3ab2b764cc7d-operator-scripts\") pod \"neutron-db-create-m4gk2\" (UID: \"67ba26f0-21ac-43b2-a954-3ab2b764cc7d\") " pod="openstack/neutron-db-create-m4gk2" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.866884 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7ebc81e-dae3-428f-9401-ddead1a42cec-combined-ca-bundle\") pod \"keystone-db-sync-crjbt\" (UID: 
\"c7ebc81e-dae3-428f-9401-ddead1a42cec\") " pod="openstack/keystone-db-sync-crjbt" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.866925 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmx86\" (UniqueName: \"kubernetes.io/projected/c44ff793-41da-4b74-b057-f4b3596eeb9d-kube-api-access-wmx86\") pod \"barbican-421d-account-create-update-pkhb5\" (UID: \"c44ff793-41da-4b74-b057-f4b3596eeb9d\") " pod="openstack/barbican-421d-account-create-update-pkhb5" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.866951 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7ebc81e-dae3-428f-9401-ddead1a42cec-config-data\") pod \"keystone-db-sync-crjbt\" (UID: \"c7ebc81e-dae3-428f-9401-ddead1a42cec\") " pod="openstack/keystone-db-sync-crjbt" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.866994 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxmjf\" (UniqueName: \"kubernetes.io/projected/67ba26f0-21ac-43b2-a954-3ab2b764cc7d-kube-api-access-lxmjf\") pod \"neutron-db-create-m4gk2\" (UID: \"67ba26f0-21ac-43b2-a954-3ab2b764cc7d\") " pod="openstack/neutron-db-create-m4gk2" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.868553 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c44ff793-41da-4b74-b057-f4b3596eeb9d-operator-scripts\") pod \"barbican-421d-account-create-update-pkhb5\" (UID: \"c44ff793-41da-4b74-b057-f4b3596eeb9d\") " pod="openstack/barbican-421d-account-create-update-pkhb5" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.894577 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmx86\" (UniqueName: \"kubernetes.io/projected/c44ff793-41da-4b74-b057-f4b3596eeb9d-kube-api-access-wmx86\") pod 
\"barbican-421d-account-create-update-pkhb5\" (UID: \"c44ff793-41da-4b74-b057-f4b3596eeb9d\") " pod="openstack/barbican-421d-account-create-update-pkhb5" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.968793 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dz49\" (UniqueName: \"kubernetes.io/projected/c7ebc81e-dae3-428f-9401-ddead1a42cec-kube-api-access-6dz49\") pod \"keystone-db-sync-crjbt\" (UID: \"c7ebc81e-dae3-428f-9401-ddead1a42cec\") " pod="openstack/keystone-db-sync-crjbt" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.968876 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67ba26f0-21ac-43b2-a954-3ab2b764cc7d-operator-scripts\") pod \"neutron-db-create-m4gk2\" (UID: \"67ba26f0-21ac-43b2-a954-3ab2b764cc7d\") " pod="openstack/neutron-db-create-m4gk2" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.968935 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwp9k\" (UniqueName: \"kubernetes.io/projected/6fed081d-f826-4383-b919-126d6a2aa92d-kube-api-access-zwp9k\") pod \"neutron-9ea4-account-create-update-ggv5q\" (UID: \"6fed081d-f826-4383-b919-126d6a2aa92d\") " pod="openstack/neutron-9ea4-account-create-update-ggv5q" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.968965 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7ebc81e-dae3-428f-9401-ddead1a42cec-combined-ca-bundle\") pod \"keystone-db-sync-crjbt\" (UID: \"c7ebc81e-dae3-428f-9401-ddead1a42cec\") " pod="openstack/keystone-db-sync-crjbt" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.969028 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c7ebc81e-dae3-428f-9401-ddead1a42cec-config-data\") pod \"keystone-db-sync-crjbt\" (UID: \"c7ebc81e-dae3-428f-9401-ddead1a42cec\") " pod="openstack/keystone-db-sync-crjbt" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.969093 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fed081d-f826-4383-b919-126d6a2aa92d-operator-scripts\") pod \"neutron-9ea4-account-create-update-ggv5q\" (UID: \"6fed081d-f826-4383-b919-126d6a2aa92d\") " pod="openstack/neutron-9ea4-account-create-update-ggv5q" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.969122 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxmjf\" (UniqueName: \"kubernetes.io/projected/67ba26f0-21ac-43b2-a954-3ab2b764cc7d-kube-api-access-lxmjf\") pod \"neutron-db-create-m4gk2\" (UID: \"67ba26f0-21ac-43b2-a954-3ab2b764cc7d\") " pod="openstack/neutron-db-create-m4gk2" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.969872 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67ba26f0-21ac-43b2-a954-3ab2b764cc7d-operator-scripts\") pod \"neutron-db-create-m4gk2\" (UID: \"67ba26f0-21ac-43b2-a954-3ab2b764cc7d\") " pod="openstack/neutron-db-create-m4gk2" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.972088 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7ebc81e-dae3-428f-9401-ddead1a42cec-config-data\") pod \"keystone-db-sync-crjbt\" (UID: \"c7ebc81e-dae3-428f-9401-ddead1a42cec\") " pod="openstack/keystone-db-sync-crjbt" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.973310 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c7ebc81e-dae3-428f-9401-ddead1a42cec-combined-ca-bundle\") pod \"keystone-db-sync-crjbt\" (UID: \"c7ebc81e-dae3-428f-9401-ddead1a42cec\") " pod="openstack/keystone-db-sync-crjbt" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.985977 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-421d-account-create-update-pkhb5" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.995531 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dz49\" (UniqueName: \"kubernetes.io/projected/c7ebc81e-dae3-428f-9401-ddead1a42cec-kube-api-access-6dz49\") pod \"keystone-db-sync-crjbt\" (UID: \"c7ebc81e-dae3-428f-9401-ddead1a42cec\") " pod="openstack/keystone-db-sync-crjbt" Feb 27 19:09:30 crc kubenswrapper[4981]: I0227 19:09:30.995758 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxmjf\" (UniqueName: \"kubernetes.io/projected/67ba26f0-21ac-43b2-a954-3ab2b764cc7d-kube-api-access-lxmjf\") pod \"neutron-db-create-m4gk2\" (UID: \"67ba26f0-21ac-43b2-a954-3ab2b764cc7d\") " pod="openstack/neutron-db-create-m4gk2" Feb 27 19:09:31 crc kubenswrapper[4981]: I0227 19:09:31.070485 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-crjbt" Feb 27 19:09:31 crc kubenswrapper[4981]: I0227 19:09:31.071108 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwp9k\" (UniqueName: \"kubernetes.io/projected/6fed081d-f826-4383-b919-126d6a2aa92d-kube-api-access-zwp9k\") pod \"neutron-9ea4-account-create-update-ggv5q\" (UID: \"6fed081d-f826-4383-b919-126d6a2aa92d\") " pod="openstack/neutron-9ea4-account-create-update-ggv5q" Feb 27 19:09:31 crc kubenswrapper[4981]: I0227 19:09:31.071210 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fed081d-f826-4383-b919-126d6a2aa92d-operator-scripts\") pod \"neutron-9ea4-account-create-update-ggv5q\" (UID: \"6fed081d-f826-4383-b919-126d6a2aa92d\") " pod="openstack/neutron-9ea4-account-create-update-ggv5q" Feb 27 19:09:31 crc kubenswrapper[4981]: I0227 19:09:31.072331 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fed081d-f826-4383-b919-126d6a2aa92d-operator-scripts\") pod \"neutron-9ea4-account-create-update-ggv5q\" (UID: \"6fed081d-f826-4383-b919-126d6a2aa92d\") " pod="openstack/neutron-9ea4-account-create-update-ggv5q" Feb 27 19:09:31 crc kubenswrapper[4981]: I0227 19:09:31.090126 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwp9k\" (UniqueName: \"kubernetes.io/projected/6fed081d-f826-4383-b919-126d6a2aa92d-kube-api-access-zwp9k\") pod \"neutron-9ea4-account-create-update-ggv5q\" (UID: \"6fed081d-f826-4383-b919-126d6a2aa92d\") " pod="openstack/neutron-9ea4-account-create-update-ggv5q" Feb 27 19:09:31 crc kubenswrapper[4981]: I0227 19:09:31.112184 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-m4gk2" Feb 27 19:09:31 crc kubenswrapper[4981]: I0227 19:09:31.136873 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9ea4-account-create-update-ggv5q" Feb 27 19:09:31 crc kubenswrapper[4981]: I0227 19:09:31.152550 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-2srwk"] Feb 27 19:09:33 crc kubenswrapper[4981]: W0227 19:09:33.424194 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb46d876a_df60_46ef_a33d_6f2ddb4261f6.slice/crio-7812d84de50391dbc70ea448e2675f2565c650f02cfa775cafc8628edcd4b83b WatchSource:0}: Error finding container 7812d84de50391dbc70ea448e2675f2565c650f02cfa775cafc8628edcd4b83b: Status 404 returned error can't find the container with id 7812d84de50391dbc70ea448e2675f2565c650f02cfa775cafc8628edcd4b83b Feb 27 19:09:34 crc kubenswrapper[4981]: I0227 19:09:34.103522 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2srwk" event={"ID":"b46d876a-df60-46ef-a33d-6f2ddb4261f6","Type":"ContainerStarted","Data":"7812d84de50391dbc70ea448e2675f2565c650f02cfa775cafc8628edcd4b83b"} Feb 27 19:09:35 crc kubenswrapper[4981]: I0227 19:09:35.055118 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-421d-account-create-update-pkhb5"] Feb 27 19:09:35 crc kubenswrapper[4981]: I0227 19:09:35.064975 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-m4gk2"] Feb 27 19:09:35 crc kubenswrapper[4981]: I0227 19:09:35.075817 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9ea4-account-create-update-ggv5q"] Feb 27 19:09:35 crc kubenswrapper[4981]: W0227 19:09:35.079002 4981 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fed081d_f826_4383_b919_126d6a2aa92d.slice/crio-3576b599df01ead447891be7b04c8cf1212ecd1da5263318d3c7817798529819 WatchSource:0}: Error finding container 3576b599df01ead447891be7b04c8cf1212ecd1da5263318d3c7817798529819: Status 404 returned error can't find the container with id 3576b599df01ead447891be7b04c8cf1212ecd1da5263318d3c7817798529819 Feb 27 19:09:35 crc kubenswrapper[4981]: W0227 19:09:35.088691 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67ba26f0_21ac_43b2_a954_3ab2b764cc7d.slice/crio-3c23c8057684f973d94fed046ca7f66c5a0bcbb4f95aaa6f698acefd5ec5c490 WatchSource:0}: Error finding container 3c23c8057684f973d94fed046ca7f66c5a0bcbb4f95aaa6f698acefd5ec5c490: Status 404 returned error can't find the container with id 3c23c8057684f973d94fed046ca7f66c5a0bcbb4f95aaa6f698acefd5ec5c490 Feb 27 19:09:35 crc kubenswrapper[4981]: I0227 19:09:35.126825 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9ea4-account-create-update-ggv5q" event={"ID":"6fed081d-f826-4383-b919-126d6a2aa92d","Type":"ContainerStarted","Data":"3576b599df01ead447891be7b04c8cf1212ecd1da5263318d3c7817798529819"} Feb 27 19:09:35 crc kubenswrapper[4981]: I0227 19:09:35.132662 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-m4gk2" event={"ID":"67ba26f0-21ac-43b2-a954-3ab2b764cc7d","Type":"ContainerStarted","Data":"3c23c8057684f973d94fed046ca7f66c5a0bcbb4f95aaa6f698acefd5ec5c490"} Feb 27 19:09:35 crc kubenswrapper[4981]: I0227 19:09:35.149226 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-421d-account-create-update-pkhb5" event={"ID":"c44ff793-41da-4b74-b057-f4b3596eeb9d","Type":"ContainerStarted","Data":"264bc32c96fe46e08427489d4b6fc3e9d0ec6b7689ce877e7993568413212ba2"} Feb 27 19:09:35 crc kubenswrapper[4981]: I0227 19:09:35.150850 4981 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-spmns"] Feb 27 19:09:35 crc kubenswrapper[4981]: I0227 19:09:35.156660 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cb28-account-create-update-wm6rr"] Feb 27 19:09:35 crc kubenswrapper[4981]: I0227 19:09:35.162029 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-crjbt"] Feb 27 19:09:35 crc kubenswrapper[4981]: I0227 19:09:35.162894 4981 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 19:09:35 crc kubenswrapper[4981]: W0227 19:09:35.160741 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7ebc81e_dae3_428f_9401_ddead1a42cec.slice/crio-7cda73fc3f05087b6fe8b0d757059a51ed75ecc0358eb075ebf1c1cb59abfdab WatchSource:0}: Error finding container 7cda73fc3f05087b6fe8b0d757059a51ed75ecc0358eb075ebf1c1cb59abfdab: Status 404 returned error can't find the container with id 7cda73fc3f05087b6fe8b0d757059a51ed75ecc0358eb075ebf1c1cb59abfdab Feb 27 19:09:35 crc kubenswrapper[4981]: W0227 19:09:35.170420 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94eef5c5_d31c_4759_995e_ce36727018f1.slice/crio-50215ca0b79454673ba5b99d17855b86cde44c625cdd319a79beedb9ec75da37 WatchSource:0}: Error finding container 50215ca0b79454673ba5b99d17855b86cde44c625cdd319a79beedb9ec75da37: Status 404 returned error can't find the container with id 50215ca0b79454673ba5b99d17855b86cde44c625cdd319a79beedb9ec75da37 Feb 27 19:09:36 crc kubenswrapper[4981]: I0227 19:09:36.166251 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2srwk" event={"ID":"b46d876a-df60-46ef-a33d-6f2ddb4261f6","Type":"ContainerStarted","Data":"a5d27a8ad6164dc28ebcc1185e3ce3c12770dba5e450d93a6006b8cf3e9f549a"} Feb 27 19:09:36 crc 
kubenswrapper[4981]: I0227 19:09:36.174268 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-crjbt" event={"ID":"c7ebc81e-dae3-428f-9401-ddead1a42cec","Type":"ContainerStarted","Data":"7cda73fc3f05087b6fe8b0d757059a51ed75ecc0358eb075ebf1c1cb59abfdab"} Feb 27 19:09:36 crc kubenswrapper[4981]: I0227 19:09:36.177427 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9ea4-account-create-update-ggv5q" event={"ID":"6fed081d-f826-4383-b919-126d6a2aa92d","Type":"ContainerStarted","Data":"531958d8e14e7d34b3f90789e5e2637a638062c6d362ab894c4bc534b9ce119c"} Feb 27 19:09:36 crc kubenswrapper[4981]: I0227 19:09:36.183546 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tgsfm" event={"ID":"81024b85-8686-478d-b17e-7c599561675b","Type":"ContainerStarted","Data":"5332cf3f5b64f2f3b52b937ec64b6dff04cfa7fdd73b7dc4e33fe5d2c008f675"} Feb 27 19:09:36 crc kubenswrapper[4981]: I0227 19:09:36.187440 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-spmns" event={"ID":"1dd430a2-0c5e-4acc-9123-6bee2f09aa67","Type":"ContainerStarted","Data":"ba25fc01cd3ba9d204a8832a68f3c221b9bf12a26dc747c0f8450c4251fb2747"} Feb 27 19:09:36 crc kubenswrapper[4981]: I0227 19:09:36.187519 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-spmns" event={"ID":"1dd430a2-0c5e-4acc-9123-6bee2f09aa67","Type":"ContainerStarted","Data":"38dd0b1a9c0d5c2588d81a1e9c3894b22e948952446434fe0f86996ac4de154c"} Feb 27 19:09:36 crc kubenswrapper[4981]: I0227 19:09:36.189698 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-2srwk" podStartSLOduration=6.189678312 podStartE2EDuration="6.189678312s" podCreationTimestamp="2026-02-27 19:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:09:36.180324623 
+0000 UTC m=+1475.659105803" watchObservedRunningTime="2026-02-27 19:09:36.189678312 +0000 UTC m=+1475.668459492" Feb 27 19:09:36 crc kubenswrapper[4981]: I0227 19:09:36.190042 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-m4gk2" event={"ID":"67ba26f0-21ac-43b2-a954-3ab2b764cc7d","Type":"ContainerStarted","Data":"f332c9c4f6940a8270735e0201344c8a12b47b999c9c7ed16d5e9cbe6c3bf7c5"} Feb 27 19:09:36 crc kubenswrapper[4981]: I0227 19:09:36.196958 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-421d-account-create-update-pkhb5" event={"ID":"c44ff793-41da-4b74-b057-f4b3596eeb9d","Type":"ContainerStarted","Data":"43954ed89f1a5a50cab1e0763369b500e72de40de24f9ee7294e4748d6f94e76"} Feb 27 19:09:36 crc kubenswrapper[4981]: I0227 19:09:36.200024 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cb28-account-create-update-wm6rr" event={"ID":"94eef5c5-d31c-4759-995e-ce36727018f1","Type":"ContainerStarted","Data":"60ea99e8d510f6df63673bce3568154a3b9731d5509747db3b548c69fc6d391a"} Feb 27 19:09:36 crc kubenswrapper[4981]: I0227 19:09:36.200100 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cb28-account-create-update-wm6rr" event={"ID":"94eef5c5-d31c-4759-995e-ce36727018f1","Type":"ContainerStarted","Data":"50215ca0b79454673ba5b99d17855b86cde44c625cdd319a79beedb9ec75da37"} Feb 27 19:09:36 crc kubenswrapper[4981]: I0227 19:09:36.207030 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-9ea4-account-create-update-ggv5q" podStartSLOduration=6.206991716 podStartE2EDuration="6.206991716s" podCreationTimestamp="2026-02-27 19:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:09:36.196312797 +0000 UTC m=+1475.675093977" watchObservedRunningTime="2026-02-27 19:09:36.206991716 +0000 UTC m=+1475.685772886" Feb 27 
19:09:36 crc kubenswrapper[4981]: I0227 19:09:36.226360 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-421d-account-create-update-pkhb5" podStartSLOduration=6.226335764 podStartE2EDuration="6.226335764s" podCreationTimestamp="2026-02-27 19:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:09:36.216133048 +0000 UTC m=+1475.694914248" watchObservedRunningTime="2026-02-27 19:09:36.226335764 +0000 UTC m=+1475.705116934" Feb 27 19:09:36 crc kubenswrapper[4981]: I0227 19:09:36.242406 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-m4gk2" podStartSLOduration=6.242383449 podStartE2EDuration="6.242383449s" podCreationTimestamp="2026-02-27 19:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:09:36.233208766 +0000 UTC m=+1475.711989936" watchObservedRunningTime="2026-02-27 19:09:36.242383449 +0000 UTC m=+1475.721164609" Feb 27 19:09:36 crc kubenswrapper[4981]: I0227 19:09:36.264367 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-tgsfm" podStartSLOduration=2.961772524 podStartE2EDuration="1m46.264340357s" podCreationTimestamp="2026-02-27 19:07:50 +0000 UTC" firstStartedPulling="2026-02-27 19:07:51.148438947 +0000 UTC m=+1370.627220107" lastFinishedPulling="2026-02-27 19:09:34.45100674 +0000 UTC m=+1473.929787940" observedRunningTime="2026-02-27 19:09:36.255007518 +0000 UTC m=+1475.733788718" watchObservedRunningTime="2026-02-27 19:09:36.264340357 +0000 UTC m=+1475.743121527" Feb 27 19:09:36 crc kubenswrapper[4981]: I0227 19:09:36.287029 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-spmns" podStartSLOduration=6.286985156 
podStartE2EDuration="6.286985156s" podCreationTimestamp="2026-02-27 19:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:09:36.273151399 +0000 UTC m=+1475.751932589" watchObservedRunningTime="2026-02-27 19:09:36.286985156 +0000 UTC m=+1475.765766316" Feb 27 19:09:36 crc kubenswrapper[4981]: I0227 19:09:36.301631 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-cb28-account-create-update-wm6rr" podStartSLOduration=6.301605148 podStartE2EDuration="6.301605148s" podCreationTimestamp="2026-02-27 19:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:09:36.296488269 +0000 UTC m=+1475.775269449" watchObservedRunningTime="2026-02-27 19:09:36.301605148 +0000 UTC m=+1475.780386318" Feb 27 19:09:38 crc kubenswrapper[4981]: I0227 19:09:38.227132 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerStarted","Data":"bd23d8482fb237875074c0a92ce77c62ec21a9f35c2014202018bbdef7e20697"} Feb 27 19:09:38 crc kubenswrapper[4981]: I0227 19:09:38.228109 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerStarted","Data":"65f6f3c00e9667ac2dc2eaf62c9691a794f16c6916c044f6252dfe67b11c9cec"} Feb 27 19:09:39 crc kubenswrapper[4981]: I0227 19:09:39.244493 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerStarted","Data":"80c7a986c413669964ba2fa274f8997a3315fbfd2c8ff1d23dbd74c88b68e595"} Feb 27 19:10:01 crc kubenswrapper[4981]: I0227 19:09:50.369833 4981 generic.go:334] "Generic (PLEG): container finished" 
podID="b46d876a-df60-46ef-a33d-6f2ddb4261f6" containerID="a5d27a8ad6164dc28ebcc1185e3ce3c12770dba5e450d93a6006b8cf3e9f549a" exitCode=0 Feb 27 19:10:01 crc kubenswrapper[4981]: I0227 19:09:50.369927 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2srwk" event={"ID":"b46d876a-df60-46ef-a33d-6f2ddb4261f6","Type":"ContainerDied","Data":"a5d27a8ad6164dc28ebcc1185e3ce3c12770dba5e450d93a6006b8cf3e9f549a"} Feb 27 19:10:01 crc kubenswrapper[4981]: I0227 19:10:00.150526 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536990-lrfs7"] Feb 27 19:10:01 crc kubenswrapper[4981]: I0227 19:10:00.152271 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536990-lrfs7" Feb 27 19:10:01 crc kubenswrapper[4981]: I0227 19:10:00.156524 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf" Feb 27 19:10:01 crc kubenswrapper[4981]: I0227 19:10:00.156747 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:10:01 crc kubenswrapper[4981]: I0227 19:10:00.156946 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:10:01 crc kubenswrapper[4981]: I0227 19:10:00.163259 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536990-lrfs7"] Feb 27 19:10:01 crc kubenswrapper[4981]: I0227 19:10:00.200632 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbprs\" (UniqueName: \"kubernetes.io/projected/eac9845d-e69d-4927-93c6-ec79af3de438-kube-api-access-sbprs\") pod \"auto-csr-approver-29536990-lrfs7\" (UID: \"eac9845d-e69d-4927-93c6-ec79af3de438\") " pod="openshift-infra/auto-csr-approver-29536990-lrfs7" Feb 27 19:10:01 crc kubenswrapper[4981]: I0227 
19:10:00.302373 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbprs\" (UniqueName: \"kubernetes.io/projected/eac9845d-e69d-4927-93c6-ec79af3de438-kube-api-access-sbprs\") pod \"auto-csr-approver-29536990-lrfs7\" (UID: \"eac9845d-e69d-4927-93c6-ec79af3de438\") " pod="openshift-infra/auto-csr-approver-29536990-lrfs7" Feb 27 19:10:01 crc kubenswrapper[4981]: I0227 19:10:00.332274 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbprs\" (UniqueName: \"kubernetes.io/projected/eac9845d-e69d-4927-93c6-ec79af3de438-kube-api-access-sbprs\") pod \"auto-csr-approver-29536990-lrfs7\" (UID: \"eac9845d-e69d-4927-93c6-ec79af3de438\") " pod="openshift-infra/auto-csr-approver-29536990-lrfs7" Feb 27 19:10:01 crc kubenswrapper[4981]: I0227 19:10:00.478986 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536990-lrfs7" Feb 27 19:10:02 crc kubenswrapper[4981]: I0227 19:10:02.932478 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-jfmbn" podUID="24537f79-2aa5-4ba1-afc0-e91183569040" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 19:10:09 crc kubenswrapper[4981]: I0227 19:10:09.811829 4981 scope.go:117] "RemoveContainer" containerID="80437da3b8b7cf5b159153433d72b1f3efb261c96c7662ff55d71b5d465af809" Feb 27 19:10:11 crc kubenswrapper[4981]: I0227 19:10:11.425765 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-2srwk" Feb 27 19:10:11 crc kubenswrapper[4981]: I0227 19:10:11.541780 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfnkl\" (UniqueName: \"kubernetes.io/projected/b46d876a-df60-46ef-a33d-6f2ddb4261f6-kube-api-access-kfnkl\") pod \"b46d876a-df60-46ef-a33d-6f2ddb4261f6\" (UID: \"b46d876a-df60-46ef-a33d-6f2ddb4261f6\") " Feb 27 19:10:11 crc kubenswrapper[4981]: I0227 19:10:11.542190 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b46d876a-df60-46ef-a33d-6f2ddb4261f6-operator-scripts\") pod \"b46d876a-df60-46ef-a33d-6f2ddb4261f6\" (UID: \"b46d876a-df60-46ef-a33d-6f2ddb4261f6\") " Feb 27 19:10:11 crc kubenswrapper[4981]: I0227 19:10:11.543244 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b46d876a-df60-46ef-a33d-6f2ddb4261f6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b46d876a-df60-46ef-a33d-6f2ddb4261f6" (UID: "b46d876a-df60-46ef-a33d-6f2ddb4261f6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:10:11 crc kubenswrapper[4981]: I0227 19:10:11.547146 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b46d876a-df60-46ef-a33d-6f2ddb4261f6-kube-api-access-kfnkl" (OuterVolumeSpecName: "kube-api-access-kfnkl") pod "b46d876a-df60-46ef-a33d-6f2ddb4261f6" (UID: "b46d876a-df60-46ef-a33d-6f2ddb4261f6"). InnerVolumeSpecName "kube-api-access-kfnkl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:10:11 crc kubenswrapper[4981]: I0227 19:10:11.597394 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2srwk" event={"ID":"b46d876a-df60-46ef-a33d-6f2ddb4261f6","Type":"ContainerDied","Data":"7812d84de50391dbc70ea448e2675f2565c650f02cfa775cafc8628edcd4b83b"} Feb 27 19:10:11 crc kubenswrapper[4981]: I0227 19:10:11.597485 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7812d84de50391dbc70ea448e2675f2565c650f02cfa775cafc8628edcd4b83b" Feb 27 19:10:11 crc kubenswrapper[4981]: I0227 19:10:11.597517 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-2srwk" Feb 27 19:10:11 crc kubenswrapper[4981]: I0227 19:10:11.643988 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfnkl\" (UniqueName: \"kubernetes.io/projected/b46d876a-df60-46ef-a33d-6f2ddb4261f6-kube-api-access-kfnkl\") on node \"crc\" DevicePath \"\"" Feb 27 19:10:11 crc kubenswrapper[4981]: I0227 19:10:11.644383 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b46d876a-df60-46ef-a33d-6f2ddb4261f6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:10:11 crc kubenswrapper[4981]: W0227 19:10:11.823038 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeac9845d_e69d_4927_93c6_ec79af3de438.slice/crio-c3c3960bfca3141062eb7fa540120f01dba357230883ea1acd9f80cf3cdd4380 WatchSource:0}: Error finding container c3c3960bfca3141062eb7fa540120f01dba357230883ea1acd9f80cf3cdd4380: Status 404 returned error can't find the container with id c3c3960bfca3141062eb7fa540120f01dba357230883ea1acd9f80cf3cdd4380 Feb 27 19:10:11 crc kubenswrapper[4981]: I0227 19:10:11.823521 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-infra/auto-csr-approver-29536990-lrfs7"] Feb 27 19:10:12 crc kubenswrapper[4981]: I0227 19:10:12.606967 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536990-lrfs7" event={"ID":"eac9845d-e69d-4927-93c6-ec79af3de438","Type":"ContainerStarted","Data":"c3c3960bfca3141062eb7fa540120f01dba357230883ea1acd9f80cf3cdd4380"} Feb 27 19:10:13 crc kubenswrapper[4981]: I0227 19:10:13.653411 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerStarted","Data":"7dfea33b75db73391310211c5e0efd16be4a0053864fa6f4abfd9bc77f7118f0"} Feb 27 19:10:17 crc kubenswrapper[4981]: E0227 19:10:17.423376 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone:current-podified" Feb 27 19:10:17 crc kubenswrapper[4981]: E0227 19:10:17.424468 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,Command:[/bin/bash],Args:[-c keystone-manage 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/keystone/keystone.conf,SubPath:keystone.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6dz49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42425,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42425,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-db-sync-crjbt_openstack(c7ebc81e-dae3-428f-9401-ddead1a42cec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:10:17 crc kubenswrapper[4981]: E0227 19:10:17.425745 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/keystone-db-sync-crjbt" podUID="c7ebc81e-dae3-428f-9401-ddead1a42cec" Feb 27 19:10:17 crc kubenswrapper[4981]: E0227 19:10:17.731736 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-keystone:current-podified\\\"\"" pod="openstack/keystone-db-sync-crjbt" podUID="c7ebc81e-dae3-428f-9401-ddead1a42cec" Feb 27 19:10:18 crc kubenswrapper[4981]: I0227 19:10:18.745678 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerStarted","Data":"6429fdd1fd1cd3788a688757b026c7af8c055f3fb7254d239ca6600f69c3448f"} Feb 27 19:10:21 crc kubenswrapper[4981]: I0227 19:10:21.776237 4981 generic.go:334] "Generic (PLEG): container finished" podID="67ba26f0-21ac-43b2-a954-3ab2b764cc7d" containerID="f332c9c4f6940a8270735e0201344c8a12b47b999c9c7ed16d5e9cbe6c3bf7c5" exitCode=0 Feb 27 19:10:21 crc kubenswrapper[4981]: I0227 19:10:21.776312 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-m4gk2" event={"ID":"67ba26f0-21ac-43b2-a954-3ab2b764cc7d","Type":"ContainerDied","Data":"f332c9c4f6940a8270735e0201344c8a12b47b999c9c7ed16d5e9cbe6c3bf7c5"} Feb 27 19:10:23 crc kubenswrapper[4981]: I0227 19:10:23.148360 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-m4gk2" Feb 27 19:10:23 crc kubenswrapper[4981]: I0227 19:10:23.238700 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxmjf\" (UniqueName: \"kubernetes.io/projected/67ba26f0-21ac-43b2-a954-3ab2b764cc7d-kube-api-access-lxmjf\") pod \"67ba26f0-21ac-43b2-a954-3ab2b764cc7d\" (UID: \"67ba26f0-21ac-43b2-a954-3ab2b764cc7d\") " Feb 27 19:10:23 crc kubenswrapper[4981]: I0227 19:10:23.238744 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67ba26f0-21ac-43b2-a954-3ab2b764cc7d-operator-scripts\") pod \"67ba26f0-21ac-43b2-a954-3ab2b764cc7d\" (UID: \"67ba26f0-21ac-43b2-a954-3ab2b764cc7d\") " Feb 27 19:10:23 crc kubenswrapper[4981]: I0227 19:10:23.239441 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67ba26f0-21ac-43b2-a954-3ab2b764cc7d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67ba26f0-21ac-43b2-a954-3ab2b764cc7d" (UID: "67ba26f0-21ac-43b2-a954-3ab2b764cc7d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:10:23 crc kubenswrapper[4981]: I0227 19:10:23.245008 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67ba26f0-21ac-43b2-a954-3ab2b764cc7d-kube-api-access-lxmjf" (OuterVolumeSpecName: "kube-api-access-lxmjf") pod "67ba26f0-21ac-43b2-a954-3ab2b764cc7d" (UID: "67ba26f0-21ac-43b2-a954-3ab2b764cc7d"). InnerVolumeSpecName "kube-api-access-lxmjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:10:23 crc kubenswrapper[4981]: I0227 19:10:23.340805 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxmjf\" (UniqueName: \"kubernetes.io/projected/67ba26f0-21ac-43b2-a954-3ab2b764cc7d-kube-api-access-lxmjf\") on node \"crc\" DevicePath \"\"" Feb 27 19:10:23 crc kubenswrapper[4981]: I0227 19:10:23.340853 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67ba26f0-21ac-43b2-a954-3ab2b764cc7d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:10:23 crc kubenswrapper[4981]: I0227 19:10:23.794684 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-m4gk2" event={"ID":"67ba26f0-21ac-43b2-a954-3ab2b764cc7d","Type":"ContainerDied","Data":"3c23c8057684f973d94fed046ca7f66c5a0bcbb4f95aaa6f698acefd5ec5c490"} Feb 27 19:10:23 crc kubenswrapper[4981]: I0227 19:10:23.794732 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c23c8057684f973d94fed046ca7f66c5a0bcbb4f95aaa6f698acefd5ec5c490" Feb 27 19:10:23 crc kubenswrapper[4981]: I0227 19:10:23.794734 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-m4gk2" Feb 27 19:10:25 crc kubenswrapper[4981]: I0227 19:10:25.454792 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536990-lrfs7" event={"ID":"eac9845d-e69d-4927-93c6-ec79af3de438","Type":"ContainerStarted","Data":"46f07e59ffd023160b5b0f28c57bbb05710a1032f129bd2a26938cf25a90cd4e"} Feb 27 19:10:25 crc kubenswrapper[4981]: I0227 19:10:25.457954 4981 generic.go:334] "Generic (PLEG): container finished" podID="1dd430a2-0c5e-4acc-9123-6bee2f09aa67" containerID="ba25fc01cd3ba9d204a8832a68f3c221b9bf12a26dc747c0f8450c4251fb2747" exitCode=0 Feb 27 19:10:25 crc kubenswrapper[4981]: I0227 19:10:25.457999 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-spmns" event={"ID":"1dd430a2-0c5e-4acc-9123-6bee2f09aa67","Type":"ContainerDied","Data":"ba25fc01cd3ba9d204a8832a68f3c221b9bf12a26dc747c0f8450c4251fb2747"} Feb 27 19:10:25 crc kubenswrapper[4981]: I0227 19:10:25.462098 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerStarted","Data":"ee08f1be3428c964e3a5c4747f6aa00160451c72e3665c691697f802f5a0bff8"} Feb 27 19:10:25 crc kubenswrapper[4981]: I0227 19:10:25.490651 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536990-lrfs7" podStartSLOduration=14.308982533 podStartE2EDuration="25.490632055s" podCreationTimestamp="2026-02-27 19:10:00 +0000 UTC" firstStartedPulling="2026-02-27 19:10:11.826890933 +0000 UTC m=+1511.305672103" lastFinishedPulling="2026-02-27 19:10:23.008540455 +0000 UTC m=+1522.487321625" observedRunningTime="2026-02-27 19:10:25.471227216 +0000 UTC m=+1524.950008366" watchObservedRunningTime="2026-02-27 19:10:25.490632055 +0000 UTC m=+1524.969413205" Feb 27 19:10:26 crc kubenswrapper[4981]: E0227 19:10:26.320831 4981 pod_workers.go:1301] "Error syncing 
pod, skipping" err="[failed to \"StartContainer\" for \"account-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"account-replicator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-account:current-podified\\\"\", failed to \"StartContainer\" for \"account-auditor\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-account:current-podified\\\"\", failed to \"StartContainer\" for \"account-reaper\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-account:current-podified\\\"\"]" pod="openstack/swift-storage-0" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" Feb 27 19:10:27 crc kubenswrapper[4981]: I0227 19:10:27.702011 4981 generic.go:334] "Generic (PLEG): container finished" podID="eac9845d-e69d-4927-93c6-ec79af3de438" containerID="46f07e59ffd023160b5b0f28c57bbb05710a1032f129bd2a26938cf25a90cd4e" exitCode=0 Feb 27 19:10:27 crc kubenswrapper[4981]: I0227 19:10:27.702535 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536990-lrfs7" event={"ID":"eac9845d-e69d-4927-93c6-ec79af3de438","Type":"ContainerDied","Data":"46f07e59ffd023160b5b0f28c57bbb05710a1032f129bd2a26938cf25a90cd4e"} Feb 27 19:10:28 crc kubenswrapper[4981]: I0227 19:10:28.383619 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerStarted","Data":"f0a4445a2b6fa3cf8145c61803d537465f991247ac86d8c79a5cbc0036d344fa"} Feb 27 19:10:28 crc kubenswrapper[4981]: I0227 19:10:28.727709 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-spmns" Feb 27 19:10:29 crc kubenswrapper[4981]: I0227 19:10:29.285272 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dd430a2-0c5e-4acc-9123-6bee2f09aa67-operator-scripts\") pod \"1dd430a2-0c5e-4acc-9123-6bee2f09aa67\" (UID: \"1dd430a2-0c5e-4acc-9123-6bee2f09aa67\") " Feb 27 19:10:29 crc kubenswrapper[4981]: I0227 19:10:29.285404 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzr8x\" (UniqueName: \"kubernetes.io/projected/1dd430a2-0c5e-4acc-9123-6bee2f09aa67-kube-api-access-nzr8x\") pod \"1dd430a2-0c5e-4acc-9123-6bee2f09aa67\" (UID: \"1dd430a2-0c5e-4acc-9123-6bee2f09aa67\") " Feb 27 19:10:29 crc kubenswrapper[4981]: I0227 19:10:29.286265 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd430a2-0c5e-4acc-9123-6bee2f09aa67-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1dd430a2-0c5e-4acc-9123-6bee2f09aa67" (UID: "1dd430a2-0c5e-4acc-9123-6bee2f09aa67"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:10:29 crc kubenswrapper[4981]: I0227 19:10:29.293975 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dd430a2-0c5e-4acc-9123-6bee2f09aa67-kube-api-access-nzr8x" (OuterVolumeSpecName: "kube-api-access-nzr8x") pod "1dd430a2-0c5e-4acc-9123-6bee2f09aa67" (UID: "1dd430a2-0c5e-4acc-9123-6bee2f09aa67"). InnerVolumeSpecName "kube-api-access-nzr8x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:10:29 crc kubenswrapper[4981]: I0227 19:10:29.387073 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1dd430a2-0c5e-4acc-9123-6bee2f09aa67-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:10:29 crc kubenswrapper[4981]: I0227 19:10:29.387123 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzr8x\" (UniqueName: \"kubernetes.io/projected/1dd430a2-0c5e-4acc-9123-6bee2f09aa67-kube-api-access-nzr8x\") on node \"crc\" DevicePath \"\"" Feb 27 19:10:29 crc kubenswrapper[4981]: I0227 19:10:29.393613 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-spmns" event={"ID":"1dd430a2-0c5e-4acc-9123-6bee2f09aa67","Type":"ContainerDied","Data":"38dd0b1a9c0d5c2588d81a1e9c3894b22e948952446434fe0f86996ac4de154c"} Feb 27 19:10:29 crc kubenswrapper[4981]: I0227 19:10:29.393662 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38dd0b1a9c0d5c2588d81a1e9c3894b22e948952446434fe0f86996ac4de154c" Feb 27 19:10:29 crc kubenswrapper[4981]: I0227 19:10:29.393630 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-spmns" Feb 27 19:10:30 crc kubenswrapper[4981]: I0227 19:10:30.190885 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536990-lrfs7" Feb 27 19:10:30 crc kubenswrapper[4981]: I0227 19:10:30.286527 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbprs\" (UniqueName: \"kubernetes.io/projected/eac9845d-e69d-4927-93c6-ec79af3de438-kube-api-access-sbprs\") pod \"eac9845d-e69d-4927-93c6-ec79af3de438\" (UID: \"eac9845d-e69d-4927-93c6-ec79af3de438\") " Feb 27 19:10:30 crc kubenswrapper[4981]: I0227 19:10:30.290576 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eac9845d-e69d-4927-93c6-ec79af3de438-kube-api-access-sbprs" (OuterVolumeSpecName: "kube-api-access-sbprs") pod "eac9845d-e69d-4927-93c6-ec79af3de438" (UID: "eac9845d-e69d-4927-93c6-ec79af3de438"). InnerVolumeSpecName "kube-api-access-sbprs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:10:30 crc kubenswrapper[4981]: I0227 19:10:30.388967 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbprs\" (UniqueName: \"kubernetes.io/projected/eac9845d-e69d-4927-93c6-ec79af3de438-kube-api-access-sbprs\") on node \"crc\" DevicePath \"\"" Feb 27 19:10:30 crc kubenswrapper[4981]: I0227 19:10:30.407476 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536990-lrfs7" event={"ID":"eac9845d-e69d-4927-93c6-ec79af3de438","Type":"ContainerDied","Data":"c3c3960bfca3141062eb7fa540120f01dba357230883ea1acd9f80cf3cdd4380"} Feb 27 19:10:30 crc kubenswrapper[4981]: I0227 19:10:30.407529 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3c3960bfca3141062eb7fa540120f01dba357230883ea1acd9f80cf3cdd4380" Feb 27 19:10:30 crc kubenswrapper[4981]: I0227 19:10:30.407554 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536990-lrfs7" Feb 27 19:10:30 crc kubenswrapper[4981]: I0227 19:10:30.412984 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536984-jjx78"] Feb 27 19:10:30 crc kubenswrapper[4981]: I0227 19:10:30.420523 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536984-jjx78"] Feb 27 19:10:31 crc kubenswrapper[4981]: I0227 19:10:31.656720 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cce3f7fc-3761-458a-91ed-53ff41805400" path="/var/lib/kubelet/pods/cce3f7fc-3761-458a-91ed-53ff41805400/volumes" Feb 27 19:10:42 crc kubenswrapper[4981]: I0227 19:10:42.527957 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-crjbt" event={"ID":"c7ebc81e-dae3-428f-9401-ddead1a42cec","Type":"ContainerStarted","Data":"364f04cea6048434f5f847011578a18118dcee127078a31d7bd0e8cabfdf8b4b"} Feb 27 19:10:42 crc kubenswrapper[4981]: I0227 19:10:42.542108 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerStarted","Data":"340a6d7be188f87cef0feaea5f958cc9043c49411edd955b9683aab0230bb9ce"} Feb 27 19:10:42 crc kubenswrapper[4981]: I0227 19:10:42.542153 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerStarted","Data":"24a7799c5cd63e35072f81b37d3932a76fad3192143aeadfc8474ce31dd7dd07"} Feb 27 19:10:43 crc kubenswrapper[4981]: I0227 19:10:43.561948 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerStarted","Data":"93c87ecb8d8bad33d71e9078051a7748cc757e16bcf80e48a23944e5c1b69077"} Feb 27 19:10:44 crc kubenswrapper[4981]: I0227 19:10:44.570985 4981 generic.go:334] "Generic (PLEG): 
container finished" podID="c44ff793-41da-4b74-b057-f4b3596eeb9d" containerID="43954ed89f1a5a50cab1e0763369b500e72de40de24f9ee7294e4748d6f94e76" exitCode=0 Feb 27 19:10:44 crc kubenswrapper[4981]: I0227 19:10:44.571097 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-421d-account-create-update-pkhb5" event={"ID":"c44ff793-41da-4b74-b057-f4b3596eeb9d","Type":"ContainerDied","Data":"43954ed89f1a5a50cab1e0763369b500e72de40de24f9ee7294e4748d6f94e76"} Feb 27 19:10:44 crc kubenswrapper[4981]: I0227 19:10:44.577297 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerStarted","Data":"77798546322cfdb767abb826f6d72d37c7c97fa182b47831196724af9d277123"} Feb 27 19:10:44 crc kubenswrapper[4981]: I0227 19:10:44.604955 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-crjbt" podStartSLOduration=8.453692421 podStartE2EDuration="1m14.604938163s" podCreationTimestamp="2026-02-27 19:09:30 +0000 UTC" firstStartedPulling="2026-02-27 19:09:35.162593956 +0000 UTC m=+1474.641375126" lastFinishedPulling="2026-02-27 19:10:41.313839668 +0000 UTC m=+1540.792620868" observedRunningTime="2026-02-27 19:10:42.565395303 +0000 UTC m=+1542.044176473" watchObservedRunningTime="2026-02-27 19:10:44.604938163 +0000 UTC m=+1544.083719323" Feb 27 19:10:44 crc kubenswrapper[4981]: I0227 19:10:44.657166 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=40.221529756 podStartE2EDuration="3m20.657146904s" podCreationTimestamp="2026-02-27 19:07:24 +0000 UTC" firstStartedPulling="2026-02-27 19:08:00.876011442 +0000 UTC m=+1380.354792602" lastFinishedPulling="2026-02-27 19:10:41.31162859 +0000 UTC m=+1540.790409750" observedRunningTime="2026-02-27 19:10:44.648995233 +0000 UTC m=+1544.127776393" watchObservedRunningTime="2026-02-27 19:10:44.657146904 +0000 
UTC m=+1544.135928064" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.094727 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-kcc2s"] Feb 27 19:10:45 crc kubenswrapper[4981]: E0227 19:10:45.095394 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b46d876a-df60-46ef-a33d-6f2ddb4261f6" containerName="mariadb-database-create" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.095413 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b46d876a-df60-46ef-a33d-6f2ddb4261f6" containerName="mariadb-database-create" Feb 27 19:10:45 crc kubenswrapper[4981]: E0227 19:10:45.095425 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd430a2-0c5e-4acc-9123-6bee2f09aa67" containerName="mariadb-database-create" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.095431 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd430a2-0c5e-4acc-9123-6bee2f09aa67" containerName="mariadb-database-create" Feb 27 19:10:45 crc kubenswrapper[4981]: E0227 19:10:45.095446 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ba26f0-21ac-43b2-a954-3ab2b764cc7d" containerName="mariadb-database-create" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.095452 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ba26f0-21ac-43b2-a954-3ab2b764cc7d" containerName="mariadb-database-create" Feb 27 19:10:45 crc kubenswrapper[4981]: E0227 19:10:45.095465 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eac9845d-e69d-4927-93c6-ec79af3de438" containerName="oc" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.095470 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac9845d-e69d-4927-93c6-ec79af3de438" containerName="oc" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.095623 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dd430a2-0c5e-4acc-9123-6bee2f09aa67" 
containerName="mariadb-database-create" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.095639 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="67ba26f0-21ac-43b2-a954-3ab2b764cc7d" containerName="mariadb-database-create" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.095648 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="eac9845d-e69d-4927-93c6-ec79af3de438" containerName="oc" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.095656 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="b46d876a-df60-46ef-a33d-6f2ddb4261f6" containerName="mariadb-database-create" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.096419 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.100077 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.105620 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-kcc2s\" (UID: \"ad5174ae-aa09-4234-b2dc-69d19d951501\") " pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.105694 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-config\") pod \"dnsmasq-dns-764c5664d7-kcc2s\" (UID: \"ad5174ae-aa09-4234-b2dc-69d19d951501\") " pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.105742 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-dns-svc\") pod \"dnsmasq-dns-764c5664d7-kcc2s\" (UID: \"ad5174ae-aa09-4234-b2dc-69d19d951501\") " pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.105813 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-kcc2s\" (UID: \"ad5174ae-aa09-4234-b2dc-69d19d951501\") " pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.105837 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-kcc2s\" (UID: \"ad5174ae-aa09-4234-b2dc-69d19d951501\") " pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.105944 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kpnf\" (UniqueName: \"kubernetes.io/projected/ad5174ae-aa09-4234-b2dc-69d19d951501-kube-api-access-7kpnf\") pod \"dnsmasq-dns-764c5664d7-kcc2s\" (UID: \"ad5174ae-aa09-4234-b2dc-69d19d951501\") " pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.118110 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-kcc2s"] Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.207148 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-kcc2s\" (UID: \"ad5174ae-aa09-4234-b2dc-69d19d951501\") " 
pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.207201 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-kcc2s\" (UID: \"ad5174ae-aa09-4234-b2dc-69d19d951501\") " pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.207226 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kpnf\" (UniqueName: \"kubernetes.io/projected/ad5174ae-aa09-4234-b2dc-69d19d951501-kube-api-access-7kpnf\") pod \"dnsmasq-dns-764c5664d7-kcc2s\" (UID: \"ad5174ae-aa09-4234-b2dc-69d19d951501\") " pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.207270 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-kcc2s\" (UID: \"ad5174ae-aa09-4234-b2dc-69d19d951501\") " pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.207310 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-config\") pod \"dnsmasq-dns-764c5664d7-kcc2s\" (UID: \"ad5174ae-aa09-4234-b2dc-69d19d951501\") " pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.207332 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-dns-svc\") pod \"dnsmasq-dns-764c5664d7-kcc2s\" (UID: \"ad5174ae-aa09-4234-b2dc-69d19d951501\") " pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" Feb 27 19:10:45 
crc kubenswrapper[4981]: I0227 19:10:45.208107 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-dns-svc\") pod \"dnsmasq-dns-764c5664d7-kcc2s\" (UID: \"ad5174ae-aa09-4234-b2dc-69d19d951501\") " pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.208620 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-kcc2s\" (UID: \"ad5174ae-aa09-4234-b2dc-69d19d951501\") " pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.209165 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-kcc2s\" (UID: \"ad5174ae-aa09-4234-b2dc-69d19d951501\") " pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.209930 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-kcc2s\" (UID: \"ad5174ae-aa09-4234-b2dc-69d19d951501\") " pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.210446 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-config\") pod \"dnsmasq-dns-764c5664d7-kcc2s\" (UID: \"ad5174ae-aa09-4234-b2dc-69d19d951501\") " pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.238652 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-7kpnf\" (UniqueName: \"kubernetes.io/projected/ad5174ae-aa09-4234-b2dc-69d19d951501-kube-api-access-7kpnf\") pod \"dnsmasq-dns-764c5664d7-kcc2s\" (UID: \"ad5174ae-aa09-4234-b2dc-69d19d951501\") " pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.434080 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.701598 4981 generic.go:334] "Generic (PLEG): container finished" podID="6fed081d-f826-4383-b919-126d6a2aa92d" containerID="531958d8e14e7d34b3f90789e5e2637a638062c6d362ab894c4bc534b9ce119c" exitCode=0 Feb 27 19:10:45 crc kubenswrapper[4981]: I0227 19:10:45.702406 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9ea4-account-create-update-ggv5q" event={"ID":"6fed081d-f826-4383-b919-126d6a2aa92d","Type":"ContainerDied","Data":"531958d8e14e7d34b3f90789e5e2637a638062c6d362ab894c4bc534b9ce119c"} Feb 27 19:10:47 crc kubenswrapper[4981]: I0227 19:10:47.089571 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-421d-account-create-update-pkhb5" Feb 27 19:10:47 crc kubenswrapper[4981]: I0227 19:10:47.093643 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9ea4-account-create-update-ggv5q" Feb 27 19:10:47 crc kubenswrapper[4981]: I0227 19:10:47.189337 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fed081d-f826-4383-b919-126d6a2aa92d-operator-scripts\") pod \"6fed081d-f826-4383-b919-126d6a2aa92d\" (UID: \"6fed081d-f826-4383-b919-126d6a2aa92d\") " Feb 27 19:10:47 crc kubenswrapper[4981]: I0227 19:10:47.189405 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c44ff793-41da-4b74-b057-f4b3596eeb9d-operator-scripts\") pod \"c44ff793-41da-4b74-b057-f4b3596eeb9d\" (UID: \"c44ff793-41da-4b74-b057-f4b3596eeb9d\") " Feb 27 19:10:47 crc kubenswrapper[4981]: I0227 19:10:47.189435 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmx86\" (UniqueName: \"kubernetes.io/projected/c44ff793-41da-4b74-b057-f4b3596eeb9d-kube-api-access-wmx86\") pod \"c44ff793-41da-4b74-b057-f4b3596eeb9d\" (UID: \"c44ff793-41da-4b74-b057-f4b3596eeb9d\") " Feb 27 19:10:47 crc kubenswrapper[4981]: I0227 19:10:47.189478 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwp9k\" (UniqueName: \"kubernetes.io/projected/6fed081d-f826-4383-b919-126d6a2aa92d-kube-api-access-zwp9k\") pod \"6fed081d-f826-4383-b919-126d6a2aa92d\" (UID: \"6fed081d-f826-4383-b919-126d6a2aa92d\") " Feb 27 19:10:47 crc kubenswrapper[4981]: I0227 19:10:47.191272 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fed081d-f826-4383-b919-126d6a2aa92d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fed081d-f826-4383-b919-126d6a2aa92d" (UID: "6fed081d-f826-4383-b919-126d6a2aa92d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:10:47 crc kubenswrapper[4981]: I0227 19:10:47.200736 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c44ff793-41da-4b74-b057-f4b3596eeb9d-kube-api-access-wmx86" (OuterVolumeSpecName: "kube-api-access-wmx86") pod "c44ff793-41da-4b74-b057-f4b3596eeb9d" (UID: "c44ff793-41da-4b74-b057-f4b3596eeb9d"). InnerVolumeSpecName "kube-api-access-wmx86". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:10:47 crc kubenswrapper[4981]: I0227 19:10:47.210644 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-kcc2s"] Feb 27 19:10:47 crc kubenswrapper[4981]: W0227 19:10:47.211934 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad5174ae_aa09_4234_b2dc_69d19d951501.slice/crio-5f6fb0362a7f7716a420007e3aaa0936c9d35f7506bc28ee1ba56c9b47bf250b WatchSource:0}: Error finding container 5f6fb0362a7f7716a420007e3aaa0936c9d35f7506bc28ee1ba56c9b47bf250b: Status 404 returned error can't find the container with id 5f6fb0362a7f7716a420007e3aaa0936c9d35f7506bc28ee1ba56c9b47bf250b Feb 27 19:10:47 crc kubenswrapper[4981]: I0227 19:10:47.213673 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fed081d-f826-4383-b919-126d6a2aa92d-kube-api-access-zwp9k" (OuterVolumeSpecName: "kube-api-access-zwp9k") pod "6fed081d-f826-4383-b919-126d6a2aa92d" (UID: "6fed081d-f826-4383-b919-126d6a2aa92d"). InnerVolumeSpecName "kube-api-access-zwp9k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:10:47 crc kubenswrapper[4981]: I0227 19:10:47.254254 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c44ff793-41da-4b74-b057-f4b3596eeb9d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c44ff793-41da-4b74-b057-f4b3596eeb9d" (UID: "c44ff793-41da-4b74-b057-f4b3596eeb9d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:10:47 crc kubenswrapper[4981]: I0227 19:10:47.291808 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fed081d-f826-4383-b919-126d6a2aa92d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:10:47 crc kubenswrapper[4981]: I0227 19:10:47.291841 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c44ff793-41da-4b74-b057-f4b3596eeb9d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:10:47 crc kubenswrapper[4981]: I0227 19:10:47.291851 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmx86\" (UniqueName: \"kubernetes.io/projected/c44ff793-41da-4b74-b057-f4b3596eeb9d-kube-api-access-wmx86\") on node \"crc\" DevicePath \"\"" Feb 27 19:10:47 crc kubenswrapper[4981]: I0227 19:10:47.291863 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwp9k\" (UniqueName: \"kubernetes.io/projected/6fed081d-f826-4383-b919-126d6a2aa92d-kube-api-access-zwp9k\") on node \"crc\" DevicePath \"\"" Feb 27 19:10:47 crc kubenswrapper[4981]: I0227 19:10:47.733181 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9ea4-account-create-update-ggv5q" Feb 27 19:10:47 crc kubenswrapper[4981]: I0227 19:10:47.733168 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9ea4-account-create-update-ggv5q" event={"ID":"6fed081d-f826-4383-b919-126d6a2aa92d","Type":"ContainerDied","Data":"3576b599df01ead447891be7b04c8cf1212ecd1da5263318d3c7817798529819"} Feb 27 19:10:47 crc kubenswrapper[4981]: I0227 19:10:47.733293 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3576b599df01ead447891be7b04c8cf1212ecd1da5263318d3c7817798529819" Feb 27 19:10:47 crc kubenswrapper[4981]: I0227 19:10:47.737982 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" event={"ID":"ad5174ae-aa09-4234-b2dc-69d19d951501","Type":"ContainerStarted","Data":"6b54c45daadaacd78170538a14fa9845b30552324d351b3d6a41e8d1204afad7"} Feb 27 19:10:47 crc kubenswrapper[4981]: I0227 19:10:47.738035 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" event={"ID":"ad5174ae-aa09-4234-b2dc-69d19d951501","Type":"ContainerStarted","Data":"5f6fb0362a7f7716a420007e3aaa0936c9d35f7506bc28ee1ba56c9b47bf250b"} Feb 27 19:10:47 crc kubenswrapper[4981]: I0227 19:10:47.741709 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-421d-account-create-update-pkhb5" Feb 27 19:10:47 crc kubenswrapper[4981]: I0227 19:10:47.741707 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-421d-account-create-update-pkhb5" event={"ID":"c44ff793-41da-4b74-b057-f4b3596eeb9d","Type":"ContainerDied","Data":"264bc32c96fe46e08427489d4b6fc3e9d0ec6b7689ce877e7993568413212ba2"} Feb 27 19:10:47 crc kubenswrapper[4981]: I0227 19:10:47.741825 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="264bc32c96fe46e08427489d4b6fc3e9d0ec6b7689ce877e7993568413212ba2" Feb 27 19:10:47 crc kubenswrapper[4981]: I0227 19:10:47.745166 4981 generic.go:334] "Generic (PLEG): container finished" podID="94eef5c5-d31c-4759-995e-ce36727018f1" containerID="60ea99e8d510f6df63673bce3568154a3b9731d5509747db3b548c69fc6d391a" exitCode=0 Feb 27 19:10:47 crc kubenswrapper[4981]: I0227 19:10:47.745195 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cb28-account-create-update-wm6rr" event={"ID":"94eef5c5-d31c-4759-995e-ce36727018f1","Type":"ContainerDied","Data":"60ea99e8d510f6df63673bce3568154a3b9731d5509747db3b548c69fc6d391a"} Feb 27 19:10:49 crc kubenswrapper[4981]: I0227 19:10:49.219362 4981 generic.go:334] "Generic (PLEG): container finished" podID="ad5174ae-aa09-4234-b2dc-69d19d951501" containerID="6b54c45daadaacd78170538a14fa9845b30552324d351b3d6a41e8d1204afad7" exitCode=0 Feb 27 19:10:49 crc kubenswrapper[4981]: I0227 19:10:49.219473 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" event={"ID":"ad5174ae-aa09-4234-b2dc-69d19d951501","Type":"ContainerDied","Data":"6b54c45daadaacd78170538a14fa9845b30552324d351b3d6a41e8d1204afad7"} Feb 27 19:10:49 crc kubenswrapper[4981]: I0227 19:10:49.549712 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-cb28-account-create-update-wm6rr" Feb 27 19:10:49 crc kubenswrapper[4981]: I0227 19:10:49.658155 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94eef5c5-d31c-4759-995e-ce36727018f1-operator-scripts\") pod \"94eef5c5-d31c-4759-995e-ce36727018f1\" (UID: \"94eef5c5-d31c-4759-995e-ce36727018f1\") " Feb 27 19:10:49 crc kubenswrapper[4981]: I0227 19:10:49.658401 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq86n\" (UniqueName: \"kubernetes.io/projected/94eef5c5-d31c-4759-995e-ce36727018f1-kube-api-access-dq86n\") pod \"94eef5c5-d31c-4759-995e-ce36727018f1\" (UID: \"94eef5c5-d31c-4759-995e-ce36727018f1\") " Feb 27 19:10:49 crc kubenswrapper[4981]: I0227 19:10:49.658888 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94eef5c5-d31c-4759-995e-ce36727018f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94eef5c5-d31c-4759-995e-ce36727018f1" (UID: "94eef5c5-d31c-4759-995e-ce36727018f1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:10:49 crc kubenswrapper[4981]: I0227 19:10:49.665869 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94eef5c5-d31c-4759-995e-ce36727018f1-kube-api-access-dq86n" (OuterVolumeSpecName: "kube-api-access-dq86n") pod "94eef5c5-d31c-4759-995e-ce36727018f1" (UID: "94eef5c5-d31c-4759-995e-ce36727018f1"). InnerVolumeSpecName "kube-api-access-dq86n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:10:49 crc kubenswrapper[4981]: I0227 19:10:49.760205 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94eef5c5-d31c-4759-995e-ce36727018f1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:10:49 crc kubenswrapper[4981]: I0227 19:10:49.761033 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq86n\" (UniqueName: \"kubernetes.io/projected/94eef5c5-d31c-4759-995e-ce36727018f1-kube-api-access-dq86n\") on node \"crc\" DevicePath \"\"" Feb 27 19:10:50 crc kubenswrapper[4981]: I0227 19:10:50.246710 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" event={"ID":"ad5174ae-aa09-4234-b2dc-69d19d951501","Type":"ContainerStarted","Data":"17b63b7be8f3e351a48a0cab6cd2c2ec27e903a8a8d02a97740b2891c4da7407"} Feb 27 19:10:50 crc kubenswrapper[4981]: I0227 19:10:50.248842 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" Feb 27 19:10:50 crc kubenswrapper[4981]: I0227 19:10:50.253932 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cb28-account-create-update-wm6rr" event={"ID":"94eef5c5-d31c-4759-995e-ce36727018f1","Type":"ContainerDied","Data":"50215ca0b79454673ba5b99d17855b86cde44c625cdd319a79beedb9ec75da37"} Feb 27 19:10:50 crc kubenswrapper[4981]: I0227 19:10:50.253978 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50215ca0b79454673ba5b99d17855b86cde44c625cdd319a79beedb9ec75da37" Feb 27 19:10:50 crc kubenswrapper[4981]: I0227 19:10:50.254047 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-cb28-account-create-update-wm6rr" Feb 27 19:10:50 crc kubenswrapper[4981]: I0227 19:10:50.271810 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" podStartSLOduration=5.271785835 podStartE2EDuration="5.271785835s" podCreationTimestamp="2026-02-27 19:10:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:10:50.269507094 +0000 UTC m=+1549.748288264" watchObservedRunningTime="2026-02-27 19:10:50.271785835 +0000 UTC m=+1549.750566995" Feb 27 19:10:52 crc kubenswrapper[4981]: I0227 19:10:52.687669 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qxfqp"] Feb 27 19:10:52 crc kubenswrapper[4981]: E0227 19:10:52.688732 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44ff793-41da-4b74-b057-f4b3596eeb9d" containerName="mariadb-account-create-update" Feb 27 19:10:52 crc kubenswrapper[4981]: I0227 19:10:52.688747 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44ff793-41da-4b74-b057-f4b3596eeb9d" containerName="mariadb-account-create-update" Feb 27 19:10:52 crc kubenswrapper[4981]: E0227 19:10:52.688764 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fed081d-f826-4383-b919-126d6a2aa92d" containerName="mariadb-account-create-update" Feb 27 19:10:52 crc kubenswrapper[4981]: I0227 19:10:52.688771 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fed081d-f826-4383-b919-126d6a2aa92d" containerName="mariadb-account-create-update" Feb 27 19:10:52 crc kubenswrapper[4981]: E0227 19:10:52.688779 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94eef5c5-d31c-4759-995e-ce36727018f1" containerName="mariadb-account-create-update" Feb 27 19:10:52 crc kubenswrapper[4981]: I0227 19:10:52.688785 4981 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="94eef5c5-d31c-4759-995e-ce36727018f1" containerName="mariadb-account-create-update" Feb 27 19:10:52 crc kubenswrapper[4981]: I0227 19:10:52.688952 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fed081d-f826-4383-b919-126d6a2aa92d" containerName="mariadb-account-create-update" Feb 27 19:10:52 crc kubenswrapper[4981]: I0227 19:10:52.688964 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="94eef5c5-d31c-4759-995e-ce36727018f1" containerName="mariadb-account-create-update" Feb 27 19:10:52 crc kubenswrapper[4981]: I0227 19:10:52.688979 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c44ff793-41da-4b74-b057-f4b3596eeb9d" containerName="mariadb-account-create-update" Feb 27 19:10:52 crc kubenswrapper[4981]: I0227 19:10:52.690235 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qxfqp" Feb 27 19:10:52 crc kubenswrapper[4981]: I0227 19:10:52.703549 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qxfqp"] Feb 27 19:10:52 crc kubenswrapper[4981]: I0227 19:10:52.812148 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/439639b6-502e-4e24-b135-ef52d6c5f1bb-utilities\") pod \"community-operators-qxfqp\" (UID: \"439639b6-502e-4e24-b135-ef52d6c5f1bb\") " pod="openshift-marketplace/community-operators-qxfqp" Feb 27 19:10:52 crc kubenswrapper[4981]: I0227 19:10:52.812207 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kfpj\" (UniqueName: \"kubernetes.io/projected/439639b6-502e-4e24-b135-ef52d6c5f1bb-kube-api-access-8kfpj\") pod \"community-operators-qxfqp\" (UID: \"439639b6-502e-4e24-b135-ef52d6c5f1bb\") " pod="openshift-marketplace/community-operators-qxfqp" Feb 27 19:10:52 crc kubenswrapper[4981]: I0227 
19:10:52.812235 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/439639b6-502e-4e24-b135-ef52d6c5f1bb-catalog-content\") pod \"community-operators-qxfqp\" (UID: \"439639b6-502e-4e24-b135-ef52d6c5f1bb\") " pod="openshift-marketplace/community-operators-qxfqp" Feb 27 19:10:52 crc kubenswrapper[4981]: I0227 19:10:52.914890 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/439639b6-502e-4e24-b135-ef52d6c5f1bb-utilities\") pod \"community-operators-qxfqp\" (UID: \"439639b6-502e-4e24-b135-ef52d6c5f1bb\") " pod="openshift-marketplace/community-operators-qxfqp" Feb 27 19:10:52 crc kubenswrapper[4981]: I0227 19:10:52.915003 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kfpj\" (UniqueName: \"kubernetes.io/projected/439639b6-502e-4e24-b135-ef52d6c5f1bb-kube-api-access-8kfpj\") pod \"community-operators-qxfqp\" (UID: \"439639b6-502e-4e24-b135-ef52d6c5f1bb\") " pod="openshift-marketplace/community-operators-qxfqp" Feb 27 19:10:52 crc kubenswrapper[4981]: I0227 19:10:52.915090 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/439639b6-502e-4e24-b135-ef52d6c5f1bb-catalog-content\") pod \"community-operators-qxfqp\" (UID: \"439639b6-502e-4e24-b135-ef52d6c5f1bb\") " pod="openshift-marketplace/community-operators-qxfqp" Feb 27 19:10:52 crc kubenswrapper[4981]: I0227 19:10:52.915577 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/439639b6-502e-4e24-b135-ef52d6c5f1bb-utilities\") pod \"community-operators-qxfqp\" (UID: \"439639b6-502e-4e24-b135-ef52d6c5f1bb\") " pod="openshift-marketplace/community-operators-qxfqp" Feb 27 19:10:52 crc kubenswrapper[4981]: I0227 19:10:52.915769 
4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/439639b6-502e-4e24-b135-ef52d6c5f1bb-catalog-content\") pod \"community-operators-qxfqp\" (UID: \"439639b6-502e-4e24-b135-ef52d6c5f1bb\") " pod="openshift-marketplace/community-operators-qxfqp" Feb 27 19:10:52 crc kubenswrapper[4981]: I0227 19:10:52.942567 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kfpj\" (UniqueName: \"kubernetes.io/projected/439639b6-502e-4e24-b135-ef52d6c5f1bb-kube-api-access-8kfpj\") pod \"community-operators-qxfqp\" (UID: \"439639b6-502e-4e24-b135-ef52d6c5f1bb\") " pod="openshift-marketplace/community-operators-qxfqp" Feb 27 19:10:53 crc kubenswrapper[4981]: I0227 19:10:53.006521 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qxfqp" Feb 27 19:10:53 crc kubenswrapper[4981]: I0227 19:10:53.410432 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qxfqp"] Feb 27 19:10:54 crc kubenswrapper[4981]: I0227 19:10:54.328179 4981 generic.go:334] "Generic (PLEG): container finished" podID="439639b6-502e-4e24-b135-ef52d6c5f1bb" containerID="3a549892776f643ebbfe5925664950fe0af6d495baff1bb5775b91019dcfc8b9" exitCode=0 Feb 27 19:10:54 crc kubenswrapper[4981]: I0227 19:10:54.328365 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxfqp" event={"ID":"439639b6-502e-4e24-b135-ef52d6c5f1bb","Type":"ContainerDied","Data":"3a549892776f643ebbfe5925664950fe0af6d495baff1bb5775b91019dcfc8b9"} Feb 27 19:10:54 crc kubenswrapper[4981]: I0227 19:10:54.328666 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxfqp" event={"ID":"439639b6-502e-4e24-b135-ef52d6c5f1bb","Type":"ContainerStarted","Data":"fee50d66e546af2bb7cc68d1670b8c5a2262141ad6bf711afc4801dc8cfc1a09"} Feb 
27 19:10:55 crc kubenswrapper[4981]: I0227 19:10:55.436305 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" Feb 27 19:10:55 crc kubenswrapper[4981]: I0227 19:10:55.747776 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-cff9l"] Feb 27 19:10:55 crc kubenswrapper[4981]: I0227 19:10:55.748089 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-cff9l" podUID="e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c" containerName="dnsmasq-dns" containerID="cri-o://0bd7dd69404c0f054ec7614f54e37bb8d573b13b49e99519637e359c83731f43" gracePeriod=10 Feb 27 19:10:56 crc kubenswrapper[4981]: I0227 19:10:56.347287 4981 generic.go:334] "Generic (PLEG): container finished" podID="e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c" containerID="0bd7dd69404c0f054ec7614f54e37bb8d573b13b49e99519637e359c83731f43" exitCode=0 Feb 27 19:10:56 crc kubenswrapper[4981]: I0227 19:10:56.347380 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-cff9l" event={"ID":"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c","Type":"ContainerDied","Data":"0bd7dd69404c0f054ec7614f54e37bb8d573b13b49e99519637e359c83731f43"} Feb 27 19:10:56 crc kubenswrapper[4981]: I0227 19:10:56.847613 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-cff9l" Feb 27 19:10:56 crc kubenswrapper[4981]: I0227 19:10:56.889511 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgjsb\" (UniqueName: \"kubernetes.io/projected/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-kube-api-access-mgjsb\") pod \"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c\" (UID: \"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c\") " Feb 27 19:10:56 crc kubenswrapper[4981]: I0227 19:10:56.889649 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-ovsdbserver-sb\") pod \"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c\" (UID: \"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c\") " Feb 27 19:10:56 crc kubenswrapper[4981]: I0227 19:10:56.889688 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-config\") pod \"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c\" (UID: \"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c\") " Feb 27 19:10:56 crc kubenswrapper[4981]: I0227 19:10:56.889730 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-ovsdbserver-nb\") pod \"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c\" (UID: \"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c\") " Feb 27 19:10:56 crc kubenswrapper[4981]: I0227 19:10:56.889783 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-dns-svc\") pod \"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c\" (UID: \"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c\") " Feb 27 19:10:56 crc kubenswrapper[4981]: I0227 19:10:56.903518 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-kube-api-access-mgjsb" (OuterVolumeSpecName: "kube-api-access-mgjsb") pod "e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c" (UID: "e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c"). InnerVolumeSpecName "kube-api-access-mgjsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:10:56 crc kubenswrapper[4981]: I0227 19:10:56.962267 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c" (UID: "e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:10:56 crc kubenswrapper[4981]: I0227 19:10:56.962643 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c" (UID: "e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:10:56 crc kubenswrapper[4981]: I0227 19:10:56.963979 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-config" (OuterVolumeSpecName: "config") pod "e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c" (UID: "e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:10:56 crc kubenswrapper[4981]: I0227 19:10:56.990411 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c" (UID: "e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:10:56 crc kubenswrapper[4981]: I0227 19:10:56.991493 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 19:10:56 crc kubenswrapper[4981]: I0227 19:10:56.991526 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:10:56 crc kubenswrapper[4981]: I0227 19:10:56.991536 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 19:10:56 crc kubenswrapper[4981]: I0227 19:10:56.991547 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 19:10:56 crc kubenswrapper[4981]: I0227 19:10:56.991557 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgjsb\" (UniqueName: \"kubernetes.io/projected/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c-kube-api-access-mgjsb\") on node \"crc\" DevicePath \"\"" Feb 27 19:10:57 crc kubenswrapper[4981]: I0227 19:10:57.358462 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-cff9l" Feb 27 19:10:57 crc kubenswrapper[4981]: I0227 19:10:57.359269 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-cff9l" event={"ID":"e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c","Type":"ContainerDied","Data":"5ff2c79507c1b289553a0053eee6cf04dd2d0d81cc1e188e59da66521096c81b"} Feb 27 19:10:57 crc kubenswrapper[4981]: I0227 19:10:57.359357 4981 scope.go:117] "RemoveContainer" containerID="0bd7dd69404c0f054ec7614f54e37bb8d573b13b49e99519637e359c83731f43" Feb 27 19:10:57 crc kubenswrapper[4981]: I0227 19:10:57.365444 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxfqp" event={"ID":"439639b6-502e-4e24-b135-ef52d6c5f1bb","Type":"ContainerStarted","Data":"2bc2fb79358d61da54717d8ec9919d6d24c2edc3296f5be93c2d5db86de027f5"} Feb 27 19:10:57 crc kubenswrapper[4981]: I0227 19:10:57.415132 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-cff9l"] Feb 27 19:10:57 crc kubenswrapper[4981]: I0227 19:10:57.426005 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-cff9l"] Feb 27 19:10:57 crc kubenswrapper[4981]: I0227 19:10:57.643289 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c" path="/var/lib/kubelet/pods/e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c/volumes" Feb 27 19:10:57 crc kubenswrapper[4981]: I0227 19:10:57.973740 4981 scope.go:117] "RemoveContainer" containerID="183ebc75576d51ff1bbfa818c38efc374a0b300c82a944294b04c278f2b02249" Feb 27 19:10:58 crc kubenswrapper[4981]: I0227 19:10:58.375021 4981 generic.go:334] "Generic (PLEG): container finished" podID="439639b6-502e-4e24-b135-ef52d6c5f1bb" containerID="2bc2fb79358d61da54717d8ec9919d6d24c2edc3296f5be93c2d5db86de027f5" exitCode=0 Feb 27 19:10:58 crc kubenswrapper[4981]: I0227 19:10:58.375163 4981 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-qxfqp" event={"ID":"439639b6-502e-4e24-b135-ef52d6c5f1bb","Type":"ContainerDied","Data":"2bc2fb79358d61da54717d8ec9919d6d24c2edc3296f5be93c2d5db86de027f5"} Feb 27 19:11:07 crc kubenswrapper[4981]: I0227 19:11:07.703425 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxfqp" event={"ID":"439639b6-502e-4e24-b135-ef52d6c5f1bb","Type":"ContainerStarted","Data":"7e10278adbdc82314160d890a6568c6dacc1118e475812087b035e678a1c7404"} Feb 27 19:11:07 crc kubenswrapper[4981]: I0227 19:11:07.728556 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qxfqp" podStartSLOduration=3.093904488 podStartE2EDuration="15.72853292s" podCreationTimestamp="2026-02-27 19:10:52 +0000 UTC" firstStartedPulling="2026-02-27 19:10:54.33001483 +0000 UTC m=+1553.808795990" lastFinishedPulling="2026-02-27 19:11:06.964643262 +0000 UTC m=+1566.443424422" observedRunningTime="2026-02-27 19:11:07.721465714 +0000 UTC m=+1567.200246874" watchObservedRunningTime="2026-02-27 19:11:07.72853292 +0000 UTC m=+1567.207314080" Feb 27 19:11:11 crc kubenswrapper[4981]: I0227 19:11:11.391747 4981 scope.go:117] "RemoveContainer" containerID="37b76be44e8849910777036c66ac1eee0d414433ae844364d0a0017f22fd72cb" Feb 27 19:11:13 crc kubenswrapper[4981]: I0227 19:11:13.007034 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qxfqp" Feb 27 19:11:13 crc kubenswrapper[4981]: I0227 19:11:13.008313 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qxfqp" Feb 27 19:11:13 crc kubenswrapper[4981]: I0227 19:11:13.076457 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qxfqp" Feb 27 19:11:13 crc kubenswrapper[4981]: I0227 19:11:13.812743 4981 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qxfqp" Feb 27 19:11:13 crc kubenswrapper[4981]: I0227 19:11:13.875836 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qxfqp"] Feb 27 19:11:16 crc kubenswrapper[4981]: I0227 19:11:16.773889 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qxfqp" podUID="439639b6-502e-4e24-b135-ef52d6c5f1bb" containerName="registry-server" containerID="cri-o://7e10278adbdc82314160d890a6568c6dacc1118e475812087b035e678a1c7404" gracePeriod=2 Feb 27 19:11:17 crc kubenswrapper[4981]: I0227 19:11:17.283813 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qxfqp" Feb 27 19:11:17 crc kubenswrapper[4981]: I0227 19:11:17.405842 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/439639b6-502e-4e24-b135-ef52d6c5f1bb-utilities\") pod \"439639b6-502e-4e24-b135-ef52d6c5f1bb\" (UID: \"439639b6-502e-4e24-b135-ef52d6c5f1bb\") " Feb 27 19:11:17 crc kubenswrapper[4981]: I0227 19:11:17.405909 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kfpj\" (UniqueName: \"kubernetes.io/projected/439639b6-502e-4e24-b135-ef52d6c5f1bb-kube-api-access-8kfpj\") pod \"439639b6-502e-4e24-b135-ef52d6c5f1bb\" (UID: \"439639b6-502e-4e24-b135-ef52d6c5f1bb\") " Feb 27 19:11:17 crc kubenswrapper[4981]: I0227 19:11:17.405975 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/439639b6-502e-4e24-b135-ef52d6c5f1bb-catalog-content\") pod \"439639b6-502e-4e24-b135-ef52d6c5f1bb\" (UID: \"439639b6-502e-4e24-b135-ef52d6c5f1bb\") " Feb 27 19:11:17 crc kubenswrapper[4981]: I0227 19:11:17.407834 
4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/439639b6-502e-4e24-b135-ef52d6c5f1bb-utilities" (OuterVolumeSpecName: "utilities") pod "439639b6-502e-4e24-b135-ef52d6c5f1bb" (UID: "439639b6-502e-4e24-b135-ef52d6c5f1bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:11:17 crc kubenswrapper[4981]: I0227 19:11:17.417986 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/439639b6-502e-4e24-b135-ef52d6c5f1bb-kube-api-access-8kfpj" (OuterVolumeSpecName: "kube-api-access-8kfpj") pod "439639b6-502e-4e24-b135-ef52d6c5f1bb" (UID: "439639b6-502e-4e24-b135-ef52d6c5f1bb"). InnerVolumeSpecName "kube-api-access-8kfpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:11:17 crc kubenswrapper[4981]: I0227 19:11:17.456285 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/439639b6-502e-4e24-b135-ef52d6c5f1bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "439639b6-502e-4e24-b135-ef52d6c5f1bb" (UID: "439639b6-502e-4e24-b135-ef52d6c5f1bb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:11:17 crc kubenswrapper[4981]: I0227 19:11:17.508608 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/439639b6-502e-4e24-b135-ef52d6c5f1bb-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:11:17 crc kubenswrapper[4981]: I0227 19:11:17.508642 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kfpj\" (UniqueName: \"kubernetes.io/projected/439639b6-502e-4e24-b135-ef52d6c5f1bb-kube-api-access-8kfpj\") on node \"crc\" DevicePath \"\"" Feb 27 19:11:17 crc kubenswrapper[4981]: I0227 19:11:17.508651 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/439639b6-502e-4e24-b135-ef52d6c5f1bb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:11:17 crc kubenswrapper[4981]: I0227 19:11:17.783650 4981 generic.go:334] "Generic (PLEG): container finished" podID="439639b6-502e-4e24-b135-ef52d6c5f1bb" containerID="7e10278adbdc82314160d890a6568c6dacc1118e475812087b035e678a1c7404" exitCode=0 Feb 27 19:11:17 crc kubenswrapper[4981]: I0227 19:11:17.783699 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qxfqp" Feb 27 19:11:17 crc kubenswrapper[4981]: I0227 19:11:17.783699 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxfqp" event={"ID":"439639b6-502e-4e24-b135-ef52d6c5f1bb","Type":"ContainerDied","Data":"7e10278adbdc82314160d890a6568c6dacc1118e475812087b035e678a1c7404"} Feb 27 19:11:17 crc kubenswrapper[4981]: I0227 19:11:17.783818 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxfqp" event={"ID":"439639b6-502e-4e24-b135-ef52d6c5f1bb","Type":"ContainerDied","Data":"fee50d66e546af2bb7cc68d1670b8c5a2262141ad6bf711afc4801dc8cfc1a09"} Feb 27 19:11:17 crc kubenswrapper[4981]: I0227 19:11:17.783837 4981 scope.go:117] "RemoveContainer" containerID="7e10278adbdc82314160d890a6568c6dacc1118e475812087b035e678a1c7404" Feb 27 19:11:17 crc kubenswrapper[4981]: I0227 19:11:17.809627 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qxfqp"] Feb 27 19:11:17 crc kubenswrapper[4981]: I0227 19:11:17.817167 4981 scope.go:117] "RemoveContainer" containerID="2bc2fb79358d61da54717d8ec9919d6d24c2edc3296f5be93c2d5db86de027f5" Feb 27 19:11:17 crc kubenswrapper[4981]: I0227 19:11:17.817980 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qxfqp"] Feb 27 19:11:17 crc kubenswrapper[4981]: I0227 19:11:17.834510 4981 scope.go:117] "RemoveContainer" containerID="3a549892776f643ebbfe5925664950fe0af6d495baff1bb5775b91019dcfc8b9" Feb 27 19:11:17 crc kubenswrapper[4981]: I0227 19:11:17.871269 4981 scope.go:117] "RemoveContainer" containerID="7e10278adbdc82314160d890a6568c6dacc1118e475812087b035e678a1c7404" Feb 27 19:11:17 crc kubenswrapper[4981]: E0227 19:11:17.871788 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7e10278adbdc82314160d890a6568c6dacc1118e475812087b035e678a1c7404\": container with ID starting with 7e10278adbdc82314160d890a6568c6dacc1118e475812087b035e678a1c7404 not found: ID does not exist" containerID="7e10278adbdc82314160d890a6568c6dacc1118e475812087b035e678a1c7404" Feb 27 19:11:17 crc kubenswrapper[4981]: I0227 19:11:17.871834 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e10278adbdc82314160d890a6568c6dacc1118e475812087b035e678a1c7404"} err="failed to get container status \"7e10278adbdc82314160d890a6568c6dacc1118e475812087b035e678a1c7404\": rpc error: code = NotFound desc = could not find container \"7e10278adbdc82314160d890a6568c6dacc1118e475812087b035e678a1c7404\": container with ID starting with 7e10278adbdc82314160d890a6568c6dacc1118e475812087b035e678a1c7404 not found: ID does not exist" Feb 27 19:11:17 crc kubenswrapper[4981]: I0227 19:11:17.871865 4981 scope.go:117] "RemoveContainer" containerID="2bc2fb79358d61da54717d8ec9919d6d24c2edc3296f5be93c2d5db86de027f5" Feb 27 19:11:17 crc kubenswrapper[4981]: E0227 19:11:17.872562 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bc2fb79358d61da54717d8ec9919d6d24c2edc3296f5be93c2d5db86de027f5\": container with ID starting with 2bc2fb79358d61da54717d8ec9919d6d24c2edc3296f5be93c2d5db86de027f5 not found: ID does not exist" containerID="2bc2fb79358d61da54717d8ec9919d6d24c2edc3296f5be93c2d5db86de027f5" Feb 27 19:11:17 crc kubenswrapper[4981]: I0227 19:11:17.872616 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bc2fb79358d61da54717d8ec9919d6d24c2edc3296f5be93c2d5db86de027f5"} err="failed to get container status \"2bc2fb79358d61da54717d8ec9919d6d24c2edc3296f5be93c2d5db86de027f5\": rpc error: code = NotFound desc = could not find container \"2bc2fb79358d61da54717d8ec9919d6d24c2edc3296f5be93c2d5db86de027f5\": container with ID 
starting with 2bc2fb79358d61da54717d8ec9919d6d24c2edc3296f5be93c2d5db86de027f5 not found: ID does not exist" Feb 27 19:11:17 crc kubenswrapper[4981]: I0227 19:11:17.872646 4981 scope.go:117] "RemoveContainer" containerID="3a549892776f643ebbfe5925664950fe0af6d495baff1bb5775b91019dcfc8b9" Feb 27 19:11:17 crc kubenswrapper[4981]: E0227 19:11:17.872954 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a549892776f643ebbfe5925664950fe0af6d495baff1bb5775b91019dcfc8b9\": container with ID starting with 3a549892776f643ebbfe5925664950fe0af6d495baff1bb5775b91019dcfc8b9 not found: ID does not exist" containerID="3a549892776f643ebbfe5925664950fe0af6d495baff1bb5775b91019dcfc8b9" Feb 27 19:11:17 crc kubenswrapper[4981]: I0227 19:11:17.872981 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a549892776f643ebbfe5925664950fe0af6d495baff1bb5775b91019dcfc8b9"} err="failed to get container status \"3a549892776f643ebbfe5925664950fe0af6d495baff1bb5775b91019dcfc8b9\": rpc error: code = NotFound desc = could not find container \"3a549892776f643ebbfe5925664950fe0af6d495baff1bb5775b91019dcfc8b9\": container with ID starting with 3a549892776f643ebbfe5925664950fe0af6d495baff1bb5775b91019dcfc8b9 not found: ID does not exist" Feb 27 19:11:19 crc kubenswrapper[4981]: I0227 19:11:19.641268 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="439639b6-502e-4e24-b135-ef52d6c5f1bb" path="/var/lib/kubelet/pods/439639b6-502e-4e24-b135-ef52d6c5f1bb/volumes" Feb 27 19:11:20 crc kubenswrapper[4981]: I0227 19:11:20.249120 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:11:20 crc kubenswrapper[4981]: I0227 
19:11:20.249188 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:11:20 crc kubenswrapper[4981]: I0227 19:11:20.399848 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cw2n6"] Feb 27 19:11:20 crc kubenswrapper[4981]: E0227 19:11:20.400622 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c" containerName="init" Feb 27 19:11:20 crc kubenswrapper[4981]: I0227 19:11:20.400638 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c" containerName="init" Feb 27 19:11:20 crc kubenswrapper[4981]: E0227 19:11:20.400662 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="439639b6-502e-4e24-b135-ef52d6c5f1bb" containerName="extract-utilities" Feb 27 19:11:20 crc kubenswrapper[4981]: I0227 19:11:20.400670 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="439639b6-502e-4e24-b135-ef52d6c5f1bb" containerName="extract-utilities" Feb 27 19:11:20 crc kubenswrapper[4981]: E0227 19:11:20.400699 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="439639b6-502e-4e24-b135-ef52d6c5f1bb" containerName="registry-server" Feb 27 19:11:20 crc kubenswrapper[4981]: I0227 19:11:20.400708 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="439639b6-502e-4e24-b135-ef52d6c5f1bb" containerName="registry-server" Feb 27 19:11:20 crc kubenswrapper[4981]: E0227 19:11:20.400717 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="439639b6-502e-4e24-b135-ef52d6c5f1bb" containerName="extract-content" Feb 27 19:11:20 crc kubenswrapper[4981]: I0227 19:11:20.400724 4981 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="439639b6-502e-4e24-b135-ef52d6c5f1bb" containerName="extract-content" Feb 27 19:11:20 crc kubenswrapper[4981]: E0227 19:11:20.400739 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c" containerName="dnsmasq-dns" Feb 27 19:11:20 crc kubenswrapper[4981]: I0227 19:11:20.400746 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c" containerName="dnsmasq-dns" Feb 27 19:11:20 crc kubenswrapper[4981]: I0227 19:11:20.400966 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="439639b6-502e-4e24-b135-ef52d6c5f1bb" containerName="registry-server" Feb 27 19:11:20 crc kubenswrapper[4981]: I0227 19:11:20.400989 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="e77fe10b-25f0-4f0c-99fa-71a43bb1ad5c" containerName="dnsmasq-dns" Feb 27 19:11:20 crc kubenswrapper[4981]: I0227 19:11:20.402556 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cw2n6" Feb 27 19:11:20 crc kubenswrapper[4981]: I0227 19:11:20.421609 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cw2n6"] Feb 27 19:11:20 crc kubenswrapper[4981]: I0227 19:11:20.564869 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dc1c956-2a21-49e5-b253-bde787a32e3a-utilities\") pod \"redhat-marketplace-cw2n6\" (UID: \"1dc1c956-2a21-49e5-b253-bde787a32e3a\") " pod="openshift-marketplace/redhat-marketplace-cw2n6" Feb 27 19:11:20 crc kubenswrapper[4981]: I0227 19:11:20.564996 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dc1c956-2a21-49e5-b253-bde787a32e3a-catalog-content\") pod \"redhat-marketplace-cw2n6\" (UID: 
\"1dc1c956-2a21-49e5-b253-bde787a32e3a\") " pod="openshift-marketplace/redhat-marketplace-cw2n6" Feb 27 19:11:20 crc kubenswrapper[4981]: I0227 19:11:20.565099 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbp94\" (UniqueName: \"kubernetes.io/projected/1dc1c956-2a21-49e5-b253-bde787a32e3a-kube-api-access-pbp94\") pod \"redhat-marketplace-cw2n6\" (UID: \"1dc1c956-2a21-49e5-b253-bde787a32e3a\") " pod="openshift-marketplace/redhat-marketplace-cw2n6" Feb 27 19:11:20 crc kubenswrapper[4981]: I0227 19:11:20.666562 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbp94\" (UniqueName: \"kubernetes.io/projected/1dc1c956-2a21-49e5-b253-bde787a32e3a-kube-api-access-pbp94\") pod \"redhat-marketplace-cw2n6\" (UID: \"1dc1c956-2a21-49e5-b253-bde787a32e3a\") " pod="openshift-marketplace/redhat-marketplace-cw2n6" Feb 27 19:11:20 crc kubenswrapper[4981]: I0227 19:11:20.666632 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dc1c956-2a21-49e5-b253-bde787a32e3a-utilities\") pod \"redhat-marketplace-cw2n6\" (UID: \"1dc1c956-2a21-49e5-b253-bde787a32e3a\") " pod="openshift-marketplace/redhat-marketplace-cw2n6" Feb 27 19:11:20 crc kubenswrapper[4981]: I0227 19:11:20.666722 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dc1c956-2a21-49e5-b253-bde787a32e3a-catalog-content\") pod \"redhat-marketplace-cw2n6\" (UID: \"1dc1c956-2a21-49e5-b253-bde787a32e3a\") " pod="openshift-marketplace/redhat-marketplace-cw2n6" Feb 27 19:11:20 crc kubenswrapper[4981]: I0227 19:11:20.667275 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dc1c956-2a21-49e5-b253-bde787a32e3a-utilities\") pod \"redhat-marketplace-cw2n6\" (UID: 
\"1dc1c956-2a21-49e5-b253-bde787a32e3a\") " pod="openshift-marketplace/redhat-marketplace-cw2n6" Feb 27 19:11:20 crc kubenswrapper[4981]: I0227 19:11:20.667306 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dc1c956-2a21-49e5-b253-bde787a32e3a-catalog-content\") pod \"redhat-marketplace-cw2n6\" (UID: \"1dc1c956-2a21-49e5-b253-bde787a32e3a\") " pod="openshift-marketplace/redhat-marketplace-cw2n6" Feb 27 19:11:20 crc kubenswrapper[4981]: I0227 19:11:20.688516 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbp94\" (UniqueName: \"kubernetes.io/projected/1dc1c956-2a21-49e5-b253-bde787a32e3a-kube-api-access-pbp94\") pod \"redhat-marketplace-cw2n6\" (UID: \"1dc1c956-2a21-49e5-b253-bde787a32e3a\") " pod="openshift-marketplace/redhat-marketplace-cw2n6" Feb 27 19:11:20 crc kubenswrapper[4981]: I0227 19:11:20.737549 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cw2n6" Feb 27 19:11:21 crc kubenswrapper[4981]: I0227 19:11:21.231133 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cw2n6"] Feb 27 19:11:21 crc kubenswrapper[4981]: W0227 19:11:21.239517 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1dc1c956_2a21_49e5_b253_bde787a32e3a.slice/crio-e7b12106eb2004b610e62dc47b346e9dee3877227cab1936af345016779a062f WatchSource:0}: Error finding container e7b12106eb2004b610e62dc47b346e9dee3877227cab1936af345016779a062f: Status 404 returned error can't find the container with id e7b12106eb2004b610e62dc47b346e9dee3877227cab1936af345016779a062f Feb 27 19:11:21 crc kubenswrapper[4981]: I0227 19:11:21.822431 4981 generic.go:334] "Generic (PLEG): container finished" podID="1dc1c956-2a21-49e5-b253-bde787a32e3a" 
containerID="af44b773f409b1cef944bc808134cf43096f347c50ec2b6e721b000c060ef9c7" exitCode=0 Feb 27 19:11:21 crc kubenswrapper[4981]: I0227 19:11:21.822784 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cw2n6" event={"ID":"1dc1c956-2a21-49e5-b253-bde787a32e3a","Type":"ContainerDied","Data":"af44b773f409b1cef944bc808134cf43096f347c50ec2b6e721b000c060ef9c7"} Feb 27 19:11:21 crc kubenswrapper[4981]: I0227 19:11:21.822865 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cw2n6" event={"ID":"1dc1c956-2a21-49e5-b253-bde787a32e3a","Type":"ContainerStarted","Data":"e7b12106eb2004b610e62dc47b346e9dee3877227cab1936af345016779a062f"} Feb 27 19:11:27 crc kubenswrapper[4981]: I0227 19:11:27.890333 4981 generic.go:334] "Generic (PLEG): container finished" podID="1dc1c956-2a21-49e5-b253-bde787a32e3a" containerID="af4b8b6c7683142c0e7eeae166343db932d7a3e0ae88c4e888c079a168b8e747" exitCode=0 Feb 27 19:11:27 crc kubenswrapper[4981]: I0227 19:11:27.890440 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cw2n6" event={"ID":"1dc1c956-2a21-49e5-b253-bde787a32e3a","Type":"ContainerDied","Data":"af4b8b6c7683142c0e7eeae166343db932d7a3e0ae88c4e888c079a168b8e747"} Feb 27 19:11:32 crc kubenswrapper[4981]: I0227 19:11:32.969308 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cw2n6" event={"ID":"1dc1c956-2a21-49e5-b253-bde787a32e3a","Type":"ContainerStarted","Data":"f02bd13f0081642c84e5e9c56aee443b25647f47baf85ac9ba4799a55919a820"} Feb 27 19:11:35 crc kubenswrapper[4981]: I0227 19:11:35.017559 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cw2n6" podStartSLOduration=5.010369231 podStartE2EDuration="15.017535669s" podCreationTimestamp="2026-02-27 19:11:20 +0000 UTC" firstStartedPulling="2026-02-27 19:11:21.825551394 +0000 
UTC m=+1581.304332554" lastFinishedPulling="2026-02-27 19:11:31.832717792 +0000 UTC m=+1591.311498992" observedRunningTime="2026-02-27 19:11:35.011430842 +0000 UTC m=+1594.490212002" watchObservedRunningTime="2026-02-27 19:11:35.017535669 +0000 UTC m=+1594.496316829" Feb 27 19:11:40 crc kubenswrapper[4981]: I0227 19:11:40.737703 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cw2n6" Feb 27 19:11:40 crc kubenswrapper[4981]: I0227 19:11:40.738226 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cw2n6" Feb 27 19:11:40 crc kubenswrapper[4981]: I0227 19:11:40.789436 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cw2n6" Feb 27 19:11:41 crc kubenswrapper[4981]: I0227 19:11:41.105042 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cw2n6" Feb 27 19:11:41 crc kubenswrapper[4981]: I0227 19:11:41.168913 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cw2n6"] Feb 27 19:11:43 crc kubenswrapper[4981]: I0227 19:11:43.051396 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cw2n6" podUID="1dc1c956-2a21-49e5-b253-bde787a32e3a" containerName="registry-server" containerID="cri-o://f02bd13f0081642c84e5e9c56aee443b25647f47baf85ac9ba4799a55919a820" gracePeriod=2 Feb 27 19:11:44 crc kubenswrapper[4981]: I0227 19:11:44.083454 4981 generic.go:334] "Generic (PLEG): container finished" podID="1dc1c956-2a21-49e5-b253-bde787a32e3a" containerID="f02bd13f0081642c84e5e9c56aee443b25647f47baf85ac9ba4799a55919a820" exitCode=0 Feb 27 19:11:44 crc kubenswrapper[4981]: I0227 19:11:44.083515 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cw2n6" 
event={"ID":"1dc1c956-2a21-49e5-b253-bde787a32e3a","Type":"ContainerDied","Data":"f02bd13f0081642c84e5e9c56aee443b25647f47baf85ac9ba4799a55919a820"} Feb 27 19:11:44 crc kubenswrapper[4981]: I0227 19:11:44.353034 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cw2n6" Feb 27 19:11:44 crc kubenswrapper[4981]: I0227 19:11:44.512421 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dc1c956-2a21-49e5-b253-bde787a32e3a-catalog-content\") pod \"1dc1c956-2a21-49e5-b253-bde787a32e3a\" (UID: \"1dc1c956-2a21-49e5-b253-bde787a32e3a\") " Feb 27 19:11:44 crc kubenswrapper[4981]: I0227 19:11:44.512469 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dc1c956-2a21-49e5-b253-bde787a32e3a-utilities\") pod \"1dc1c956-2a21-49e5-b253-bde787a32e3a\" (UID: \"1dc1c956-2a21-49e5-b253-bde787a32e3a\") " Feb 27 19:11:44 crc kubenswrapper[4981]: I0227 19:11:44.512499 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbp94\" (UniqueName: \"kubernetes.io/projected/1dc1c956-2a21-49e5-b253-bde787a32e3a-kube-api-access-pbp94\") pod \"1dc1c956-2a21-49e5-b253-bde787a32e3a\" (UID: \"1dc1c956-2a21-49e5-b253-bde787a32e3a\") " Feb 27 19:11:44 crc kubenswrapper[4981]: I0227 19:11:44.513975 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dc1c956-2a21-49e5-b253-bde787a32e3a-utilities" (OuterVolumeSpecName: "utilities") pod "1dc1c956-2a21-49e5-b253-bde787a32e3a" (UID: "1dc1c956-2a21-49e5-b253-bde787a32e3a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:11:44 crc kubenswrapper[4981]: I0227 19:11:44.521560 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dc1c956-2a21-49e5-b253-bde787a32e3a-kube-api-access-pbp94" (OuterVolumeSpecName: "kube-api-access-pbp94") pod "1dc1c956-2a21-49e5-b253-bde787a32e3a" (UID: "1dc1c956-2a21-49e5-b253-bde787a32e3a"). InnerVolumeSpecName "kube-api-access-pbp94". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:11:44 crc kubenswrapper[4981]: I0227 19:11:44.559535 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dc1c956-2a21-49e5-b253-bde787a32e3a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1dc1c956-2a21-49e5-b253-bde787a32e3a" (UID: "1dc1c956-2a21-49e5-b253-bde787a32e3a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:11:44 crc kubenswrapper[4981]: I0227 19:11:44.615451 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dc1c956-2a21-49e5-b253-bde787a32e3a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:11:44 crc kubenswrapper[4981]: I0227 19:11:44.615523 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dc1c956-2a21-49e5-b253-bde787a32e3a-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:11:44 crc kubenswrapper[4981]: I0227 19:11:44.615546 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbp94\" (UniqueName: \"kubernetes.io/projected/1dc1c956-2a21-49e5-b253-bde787a32e3a-kube-api-access-pbp94\") on node \"crc\" DevicePath \"\"" Feb 27 19:11:45 crc kubenswrapper[4981]: I0227 19:11:45.101505 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cw2n6" 
event={"ID":"1dc1c956-2a21-49e5-b253-bde787a32e3a","Type":"ContainerDied","Data":"e7b12106eb2004b610e62dc47b346e9dee3877227cab1936af345016779a062f"} Feb 27 19:11:45 crc kubenswrapper[4981]: I0227 19:11:45.101576 4981 scope.go:117] "RemoveContainer" containerID="f02bd13f0081642c84e5e9c56aee443b25647f47baf85ac9ba4799a55919a820" Feb 27 19:11:45 crc kubenswrapper[4981]: I0227 19:11:45.101585 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cw2n6" Feb 27 19:11:45 crc kubenswrapper[4981]: I0227 19:11:45.127020 4981 scope.go:117] "RemoveContainer" containerID="af4b8b6c7683142c0e7eeae166343db932d7a3e0ae88c4e888c079a168b8e747" Feb 27 19:11:45 crc kubenswrapper[4981]: I0227 19:11:45.154966 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cw2n6"] Feb 27 19:11:45 crc kubenswrapper[4981]: I0227 19:11:45.163395 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cw2n6"] Feb 27 19:11:45 crc kubenswrapper[4981]: I0227 19:11:45.165781 4981 scope.go:117] "RemoveContainer" containerID="af44b773f409b1cef944bc808134cf43096f347c50ec2b6e721b000c060ef9c7" Feb 27 19:11:45 crc kubenswrapper[4981]: I0227 19:11:45.663597 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dc1c956-2a21-49e5-b253-bde787a32e3a" path="/var/lib/kubelet/pods/1dc1c956-2a21-49e5-b253-bde787a32e3a/volumes" Feb 27 19:11:50 crc kubenswrapper[4981]: I0227 19:11:50.248484 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:11:50 crc kubenswrapper[4981]: I0227 19:11:50.250358 4981 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:12:00 crc kubenswrapper[4981]: I0227 19:12:00.139665 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536992-f82gz"] Feb 27 19:12:00 crc kubenswrapper[4981]: E0227 19:12:00.140565 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc1c956-2a21-49e5-b253-bde787a32e3a" containerName="extract-content" Feb 27 19:12:00 crc kubenswrapper[4981]: I0227 19:12:00.140578 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc1c956-2a21-49e5-b253-bde787a32e3a" containerName="extract-content" Feb 27 19:12:00 crc kubenswrapper[4981]: E0227 19:12:00.140606 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc1c956-2a21-49e5-b253-bde787a32e3a" containerName="registry-server" Feb 27 19:12:00 crc kubenswrapper[4981]: I0227 19:12:00.140613 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc1c956-2a21-49e5-b253-bde787a32e3a" containerName="registry-server" Feb 27 19:12:00 crc kubenswrapper[4981]: E0227 19:12:00.140624 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc1c956-2a21-49e5-b253-bde787a32e3a" containerName="extract-utilities" Feb 27 19:12:00 crc kubenswrapper[4981]: I0227 19:12:00.140631 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc1c956-2a21-49e5-b253-bde787a32e3a" containerName="extract-utilities" Feb 27 19:12:00 crc kubenswrapper[4981]: I0227 19:12:00.140842 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc1c956-2a21-49e5-b253-bde787a32e3a" containerName="registry-server" Feb 27 19:12:00 crc kubenswrapper[4981]: I0227 19:12:00.141389 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536992-f82gz" Feb 27 19:12:00 crc kubenswrapper[4981]: I0227 19:12:00.147409 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:12:00 crc kubenswrapper[4981]: I0227 19:12:00.148304 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf" Feb 27 19:12:00 crc kubenswrapper[4981]: I0227 19:12:00.148600 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:12:00 crc kubenswrapper[4981]: I0227 19:12:00.156138 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwmw9\" (UniqueName: \"kubernetes.io/projected/74677fdb-8a47-430f-9ede-c884ece1c7c0-kube-api-access-nwmw9\") pod \"auto-csr-approver-29536992-f82gz\" (UID: \"74677fdb-8a47-430f-9ede-c884ece1c7c0\") " pod="openshift-infra/auto-csr-approver-29536992-f82gz" Feb 27 19:12:00 crc kubenswrapper[4981]: I0227 19:12:00.157155 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536992-f82gz"] Feb 27 19:12:00 crc kubenswrapper[4981]: I0227 19:12:00.258017 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwmw9\" (UniqueName: \"kubernetes.io/projected/74677fdb-8a47-430f-9ede-c884ece1c7c0-kube-api-access-nwmw9\") pod \"auto-csr-approver-29536992-f82gz\" (UID: \"74677fdb-8a47-430f-9ede-c884ece1c7c0\") " pod="openshift-infra/auto-csr-approver-29536992-f82gz" Feb 27 19:12:00 crc kubenswrapper[4981]: I0227 19:12:00.277641 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwmw9\" (UniqueName: \"kubernetes.io/projected/74677fdb-8a47-430f-9ede-c884ece1c7c0-kube-api-access-nwmw9\") pod \"auto-csr-approver-29536992-f82gz\" (UID: \"74677fdb-8a47-430f-9ede-c884ece1c7c0\") " 
pod="openshift-infra/auto-csr-approver-29536992-f82gz" Feb 27 19:12:00 crc kubenswrapper[4981]: I0227 19:12:00.459132 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536992-f82gz" Feb 27 19:12:01 crc kubenswrapper[4981]: I0227 19:12:01.082412 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536992-f82gz"] Feb 27 19:12:01 crc kubenswrapper[4981]: I0227 19:12:01.248565 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536992-f82gz" event={"ID":"74677fdb-8a47-430f-9ede-c884ece1c7c0","Type":"ContainerStarted","Data":"994da8624daf2afe84f48478710fa8c2ae6fe312af4d94e8ea0ff267525fa130"} Feb 27 19:12:06 crc kubenswrapper[4981]: I0227 19:12:06.308405 4981 generic.go:334] "Generic (PLEG): container finished" podID="74677fdb-8a47-430f-9ede-c884ece1c7c0" containerID="52679b7eb4bb52e89dc54884eeca292914df6978af804913ebf2b2f17b260c15" exitCode=0 Feb 27 19:12:06 crc kubenswrapper[4981]: I0227 19:12:06.308646 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536992-f82gz" event={"ID":"74677fdb-8a47-430f-9ede-c884ece1c7c0","Type":"ContainerDied","Data":"52679b7eb4bb52e89dc54884eeca292914df6978af804913ebf2b2f17b260c15"} Feb 27 19:12:07 crc kubenswrapper[4981]: I0227 19:12:07.655404 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536992-f82gz" Feb 27 19:12:07 crc kubenswrapper[4981]: I0227 19:12:07.810197 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwmw9\" (UniqueName: \"kubernetes.io/projected/74677fdb-8a47-430f-9ede-c884ece1c7c0-kube-api-access-nwmw9\") pod \"74677fdb-8a47-430f-9ede-c884ece1c7c0\" (UID: \"74677fdb-8a47-430f-9ede-c884ece1c7c0\") " Feb 27 19:12:07 crc kubenswrapper[4981]: I0227 19:12:07.820585 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74677fdb-8a47-430f-9ede-c884ece1c7c0-kube-api-access-nwmw9" (OuterVolumeSpecName: "kube-api-access-nwmw9") pod "74677fdb-8a47-430f-9ede-c884ece1c7c0" (UID: "74677fdb-8a47-430f-9ede-c884ece1c7c0"). InnerVolumeSpecName "kube-api-access-nwmw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:12:07 crc kubenswrapper[4981]: I0227 19:12:07.912380 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwmw9\" (UniqueName: \"kubernetes.io/projected/74677fdb-8a47-430f-9ede-c884ece1c7c0-kube-api-access-nwmw9\") on node \"crc\" DevicePath \"\"" Feb 27 19:12:08 crc kubenswrapper[4981]: I0227 19:12:08.332756 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536992-f82gz" event={"ID":"74677fdb-8a47-430f-9ede-c884ece1c7c0","Type":"ContainerDied","Data":"994da8624daf2afe84f48478710fa8c2ae6fe312af4d94e8ea0ff267525fa130"} Feb 27 19:12:08 crc kubenswrapper[4981]: I0227 19:12:08.332806 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="994da8624daf2afe84f48478710fa8c2ae6fe312af4d94e8ea0ff267525fa130" Feb 27 19:12:08 crc kubenswrapper[4981]: I0227 19:12:08.332921 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536992-f82gz" Feb 27 19:12:08 crc kubenswrapper[4981]: I0227 19:12:08.747368 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536986-48nk8"] Feb 27 19:12:08 crc kubenswrapper[4981]: I0227 19:12:08.760034 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536986-48nk8"] Feb 27 19:12:09 crc kubenswrapper[4981]: I0227 19:12:09.641176 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdb88422-3167-4822-80a5-2e9abcb29904" path="/var/lib/kubelet/pods/fdb88422-3167-4822-80a5-2e9abcb29904/volumes" Feb 27 19:12:11 crc kubenswrapper[4981]: I0227 19:12:11.493624 4981 scope.go:117] "RemoveContainer" containerID="f1769b116d54285295f3d509699a8536ac9a91b18a4131665c502f03f5b4e4fe" Feb 27 19:12:19 crc kubenswrapper[4981]: I0227 19:12:19.460338 4981 generic.go:334] "Generic (PLEG): container finished" podID="c7ebc81e-dae3-428f-9401-ddead1a42cec" containerID="364f04cea6048434f5f847011578a18118dcee127078a31d7bd0e8cabfdf8b4b" exitCode=0 Feb 27 19:12:19 crc kubenswrapper[4981]: I0227 19:12:19.460452 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-crjbt" event={"ID":"c7ebc81e-dae3-428f-9401-ddead1a42cec","Type":"ContainerDied","Data":"364f04cea6048434f5f847011578a18118dcee127078a31d7bd0e8cabfdf8b4b"} Feb 27 19:12:20 crc kubenswrapper[4981]: I0227 19:12:20.249008 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:12:20 crc kubenswrapper[4981]: I0227 19:12:20.249460 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" 
podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:12:20 crc kubenswrapper[4981]: I0227 19:12:20.249553 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 19:12:20 crc kubenswrapper[4981]: I0227 19:12:20.250497 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba"} pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 19:12:20 crc kubenswrapper[4981]: I0227 19:12:20.250597 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" containerID="cri-o://d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" gracePeriod=600 Feb 27 19:12:20 crc kubenswrapper[4981]: I0227 19:12:20.474343 4981 generic.go:334] "Generic (PLEG): container finished" podID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" exitCode=0 Feb 27 19:12:20 crc kubenswrapper[4981]: I0227 19:12:20.474458 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerDied","Data":"d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba"} Feb 27 19:12:20 crc kubenswrapper[4981]: I0227 19:12:20.474557 4981 scope.go:117] "RemoveContainer" 
containerID="03bdafd14e1d7a2332dfab716d224757c23e9832e5c4bc0ebaf94e7e0e277e07" Feb 27 19:12:20 crc kubenswrapper[4981]: I0227 19:12:20.839538 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-crjbt" Feb 27 19:12:20 crc kubenswrapper[4981]: E0227 19:12:20.879765 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:12:20 crc kubenswrapper[4981]: I0227 19:12:20.920115 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dz49\" (UniqueName: \"kubernetes.io/projected/c7ebc81e-dae3-428f-9401-ddead1a42cec-kube-api-access-6dz49\") pod \"c7ebc81e-dae3-428f-9401-ddead1a42cec\" (UID: \"c7ebc81e-dae3-428f-9401-ddead1a42cec\") " Feb 27 19:12:20 crc kubenswrapper[4981]: I0227 19:12:20.920207 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7ebc81e-dae3-428f-9401-ddead1a42cec-config-data\") pod \"c7ebc81e-dae3-428f-9401-ddead1a42cec\" (UID: \"c7ebc81e-dae3-428f-9401-ddead1a42cec\") " Feb 27 19:12:20 crc kubenswrapper[4981]: I0227 19:12:20.920334 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7ebc81e-dae3-428f-9401-ddead1a42cec-combined-ca-bundle\") pod \"c7ebc81e-dae3-428f-9401-ddead1a42cec\" (UID: \"c7ebc81e-dae3-428f-9401-ddead1a42cec\") " Feb 27 19:12:20 crc kubenswrapper[4981]: I0227 19:12:20.928114 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c7ebc81e-dae3-428f-9401-ddead1a42cec-kube-api-access-6dz49" (OuterVolumeSpecName: "kube-api-access-6dz49") pod "c7ebc81e-dae3-428f-9401-ddead1a42cec" (UID: "c7ebc81e-dae3-428f-9401-ddead1a42cec"). InnerVolumeSpecName "kube-api-access-6dz49". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:12:20 crc kubenswrapper[4981]: I0227 19:12:20.950743 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7ebc81e-dae3-428f-9401-ddead1a42cec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7ebc81e-dae3-428f-9401-ddead1a42cec" (UID: "c7ebc81e-dae3-428f-9401-ddead1a42cec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:12:20 crc kubenswrapper[4981]: I0227 19:12:20.971126 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7ebc81e-dae3-428f-9401-ddead1a42cec-config-data" (OuterVolumeSpecName: "config-data") pod "c7ebc81e-dae3-428f-9401-ddead1a42cec" (UID: "c7ebc81e-dae3-428f-9401-ddead1a42cec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:12:21 crc kubenswrapper[4981]: I0227 19:12:21.027162 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7ebc81e-dae3-428f-9401-ddead1a42cec-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:12:21 crc kubenswrapper[4981]: I0227 19:12:21.027208 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7ebc81e-dae3-428f-9401-ddead1a42cec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:12:21 crc kubenswrapper[4981]: I0227 19:12:21.027221 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dz49\" (UniqueName: \"kubernetes.io/projected/c7ebc81e-dae3-428f-9401-ddead1a42cec-kube-api-access-6dz49\") on node \"crc\" DevicePath \"\"" Feb 27 19:12:21 crc kubenswrapper[4981]: I0227 19:12:21.765469 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" Feb 27 19:12:21 crc kubenswrapper[4981]: E0227 19:12:21.765676 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:12:21 crc kubenswrapper[4981]: I0227 19:12:21.767298 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-crjbt" event={"ID":"c7ebc81e-dae3-428f-9401-ddead1a42cec","Type":"ContainerDied","Data":"7cda73fc3f05087b6fe8b0d757059a51ed75ecc0358eb075ebf1c1cb59abfdab"} Feb 27 19:12:21 crc kubenswrapper[4981]: I0227 19:12:21.767330 4981 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="7cda73fc3f05087b6fe8b0d757059a51ed75ecc0358eb075ebf1c1cb59abfdab" Feb 27 19:12:21 crc kubenswrapper[4981]: I0227 19:12:21.767389 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-crjbt" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.652098 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-9lps4"] Feb 27 19:12:22 crc kubenswrapper[4981]: E0227 19:12:22.701169 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74677fdb-8a47-430f-9ede-c884ece1c7c0" containerName="oc" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.701215 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="74677fdb-8a47-430f-9ede-c884ece1c7c0" containerName="oc" Feb 27 19:12:22 crc kubenswrapper[4981]: E0227 19:12:22.701248 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7ebc81e-dae3-428f-9401-ddead1a42cec" containerName="keystone-db-sync" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.701260 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7ebc81e-dae3-428f-9401-ddead1a42cec" containerName="keystone-db-sync" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.708685 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7ebc81e-dae3-428f-9401-ddead1a42cec" containerName="keystone-db-sync" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.708757 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="74677fdb-8a47-430f-9ede-c884ece1c7c0" containerName="oc" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.714413 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-fsb96"] Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.714964 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-9lps4" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.724836 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.727011 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.728302 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4zdz2" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.738640 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.739016 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.818422 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-9lps4"] Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.818639 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-fsb96" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.821472 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-fsb96"] Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.923991 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-config\") pod \"dnsmasq-dns-5959f8865f-fsb96\" (UID: \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\") " pod="openstack/dnsmasq-dns-5959f8865f-fsb96" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.924108 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-scripts\") pod \"keystone-bootstrap-9lps4\" (UID: \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\") " pod="openstack/keystone-bootstrap-9lps4" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.924145 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-credential-keys\") pod \"keystone-bootstrap-9lps4\" (UID: \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\") " pod="openstack/keystone-bootstrap-9lps4" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.924169 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-fsb96\" (UID: \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\") " pod="openstack/dnsmasq-dns-5959f8865f-fsb96" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.924191 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-config-data\") pod \"keystone-bootstrap-9lps4\" (UID: \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\") " pod="openstack/keystone-bootstrap-9lps4" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.924216 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgqvk\" (UniqueName: \"kubernetes.io/projected/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-kube-api-access-kgqvk\") pod \"keystone-bootstrap-9lps4\" (UID: \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\") " pod="openstack/keystone-bootstrap-9lps4" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.924233 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-fernet-keys\") pod \"keystone-bootstrap-9lps4\" (UID: \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\") " pod="openstack/keystone-bootstrap-9lps4" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.924255 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-combined-ca-bundle\") pod \"keystone-bootstrap-9lps4\" (UID: \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\") " pod="openstack/keystone-bootstrap-9lps4" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.924297 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-fsb96\" (UID: \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\") " pod="openstack/dnsmasq-dns-5959f8865f-fsb96" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.924316 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-nq4fs\" (UniqueName: \"kubernetes.io/projected/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-kube-api-access-nq4fs\") pod \"dnsmasq-dns-5959f8865f-fsb96\" (UID: \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\") " pod="openstack/dnsmasq-dns-5959f8865f-fsb96" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.924344 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-fsb96\" (UID: \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\") " pod="openstack/dnsmasq-dns-5959f8865f-fsb96" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.924368 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-dns-svc\") pod \"dnsmasq-dns-5959f8865f-fsb96\" (UID: \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\") " pod="openstack/dnsmasq-dns-5959f8865f-fsb96" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.968836 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-7h875"] Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.970009 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-7h875" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.972938 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-kq5lk" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.973214 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.973339 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.980547 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-68687"] Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.981761 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-68687" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.983785 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.983942 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9995n" Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.992748 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-9rclp"] Feb 27 19:12:22 crc kubenswrapper[4981]: I0227 19:12:22.994482 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9rclp" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.000708 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.003916 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7h875"] Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.004258 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.004487 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-6sr2n" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.009983 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-68687"] Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.017746 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9rclp"] Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.025642 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-config\") pod \"dnsmasq-dns-5959f8865f-fsb96\" (UID: \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\") " pod="openstack/dnsmasq-dns-5959f8865f-fsb96" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.025723 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-scripts\") pod \"keystone-bootstrap-9lps4\" (UID: \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\") " pod="openstack/keystone-bootstrap-9lps4" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.025782 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-credential-keys\") pod \"keystone-bootstrap-9lps4\" (UID: \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\") " pod="openstack/keystone-bootstrap-9lps4" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.025812 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-fsb96\" (UID: \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\") " pod="openstack/dnsmasq-dns-5959f8865f-fsb96" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.025964 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-config-data\") pod \"keystone-bootstrap-9lps4\" (UID: \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\") " pod="openstack/keystone-bootstrap-9lps4" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.026001 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgqvk\" (UniqueName: \"kubernetes.io/projected/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-kube-api-access-kgqvk\") pod \"keystone-bootstrap-9lps4\" (UID: \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\") " pod="openstack/keystone-bootstrap-9lps4" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.026031 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-fernet-keys\") pod \"keystone-bootstrap-9lps4\" (UID: \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\") " pod="openstack/keystone-bootstrap-9lps4" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.026082 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-combined-ca-bundle\") pod 
\"keystone-bootstrap-9lps4\" (UID: \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\") " pod="openstack/keystone-bootstrap-9lps4" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.026142 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-fsb96\" (UID: \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\") " pod="openstack/dnsmasq-dns-5959f8865f-fsb96" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.026178 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq4fs\" (UniqueName: \"kubernetes.io/projected/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-kube-api-access-nq4fs\") pod \"dnsmasq-dns-5959f8865f-fsb96\" (UID: \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\") " pod="openstack/dnsmasq-dns-5959f8865f-fsb96" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.026207 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-fsb96\" (UID: \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\") " pod="openstack/dnsmasq-dns-5959f8865f-fsb96" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.026230 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-dns-svc\") pod \"dnsmasq-dns-5959f8865f-fsb96\" (UID: \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\") " pod="openstack/dnsmasq-dns-5959f8865f-fsb96" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.027386 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-dns-svc\") pod \"dnsmasq-dns-5959f8865f-fsb96\" (UID: 
\"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\") " pod="openstack/dnsmasq-dns-5959f8865f-fsb96" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.028459 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-config\") pod \"dnsmasq-dns-5959f8865f-fsb96\" (UID: \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\") " pod="openstack/dnsmasq-dns-5959f8865f-fsb96" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.029179 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-fsb96\" (UID: \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\") " pod="openstack/dnsmasq-dns-5959f8865f-fsb96" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.029371 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-fsb96\" (UID: \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\") " pod="openstack/dnsmasq-dns-5959f8865f-fsb96" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.042118 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-fsb96\" (UID: \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\") " pod="openstack/dnsmasq-dns-5959f8865f-fsb96" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.042131 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-config-data\") pod \"keystone-bootstrap-9lps4\" (UID: \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\") " pod="openstack/keystone-bootstrap-9lps4" Feb 27 19:12:23 crc 
kubenswrapper[4981]: I0227 19:12:23.045130 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq4fs\" (UniqueName: \"kubernetes.io/projected/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-kube-api-access-nq4fs\") pod \"dnsmasq-dns-5959f8865f-fsb96\" (UID: \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\") " pod="openstack/dnsmasq-dns-5959f8865f-fsb96" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.045766 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgqvk\" (UniqueName: \"kubernetes.io/projected/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-kube-api-access-kgqvk\") pod \"keystone-bootstrap-9lps4\" (UID: \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\") " pod="openstack/keystone-bootstrap-9lps4" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.051210 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-fernet-keys\") pod \"keystone-bootstrap-9lps4\" (UID: \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\") " pod="openstack/keystone-bootstrap-9lps4" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.126872 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/433a9f91-dd8c-4e01-9133-fe5e143bc696-config\") pod \"neutron-db-sync-9rclp\" (UID: \"433a9f91-dd8c-4e01-9133-fe5e143bc696\") " pod="openstack/neutron-db-sync-9rclp" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.127472 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grbjq\" (UniqueName: \"kubernetes.io/projected/433a9f91-dd8c-4e01-9133-fe5e143bc696-kube-api-access-grbjq\") pod \"neutron-db-sync-9rclp\" (UID: \"433a9f91-dd8c-4e01-9133-fe5e143bc696\") " pod="openstack/neutron-db-sync-9rclp" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.127572 4981 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-scripts\") pod \"cinder-db-sync-7h875\" (UID: \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\") " pod="openstack/cinder-db-sync-7h875" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.127664 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c-db-sync-config-data\") pod \"barbican-db-sync-68687\" (UID: \"f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c\") " pod="openstack/barbican-db-sync-68687" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.127743 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-config-data\") pod \"cinder-db-sync-7h875\" (UID: \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\") " pod="openstack/cinder-db-sync-7h875" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.127864 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-db-sync-config-data\") pod \"cinder-db-sync-7h875\" (UID: \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\") " pod="openstack/cinder-db-sync-7h875" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.127955 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-combined-ca-bundle\") pod \"cinder-db-sync-7h875\" (UID: \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\") " pod="openstack/cinder-db-sync-7h875" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.128046 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-etc-machine-id\") pod \"cinder-db-sync-7h875\" (UID: \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\") " pod="openstack/cinder-db-sync-7h875" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.128211 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c74n5\" (UniqueName: \"kubernetes.io/projected/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-kube-api-access-c74n5\") pod \"cinder-db-sync-7h875\" (UID: \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\") " pod="openstack/cinder-db-sync-7h875" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.128314 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/433a9f91-dd8c-4e01-9133-fe5e143bc696-combined-ca-bundle\") pod \"neutron-db-sync-9rclp\" (UID: \"433a9f91-dd8c-4e01-9133-fe5e143bc696\") " pod="openstack/neutron-db-sync-9rclp" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.128394 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c-combined-ca-bundle\") pod \"barbican-db-sync-68687\" (UID: \"f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c\") " pod="openstack/barbican-db-sync-68687" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.128480 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6np2\" (UniqueName: \"kubernetes.io/projected/f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c-kube-api-access-l6np2\") pod \"barbican-db-sync-68687\" (UID: \"f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c\") " pod="openstack/barbican-db-sync-68687" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.150644 4981 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-fsb96" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.162207 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-combined-ca-bundle\") pod \"keystone-bootstrap-9lps4\" (UID: \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\") " pod="openstack/keystone-bootstrap-9lps4" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.166552 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-scripts\") pod \"keystone-bootstrap-9lps4\" (UID: \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\") " pod="openstack/keystone-bootstrap-9lps4" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.171773 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-credential-keys\") pod \"keystone-bootstrap-9lps4\" (UID: \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\") " pod="openstack/keystone-bootstrap-9lps4" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.237686 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grbjq\" (UniqueName: \"kubernetes.io/projected/433a9f91-dd8c-4e01-9133-fe5e143bc696-kube-api-access-grbjq\") pod \"neutron-db-sync-9rclp\" (UID: \"433a9f91-dd8c-4e01-9133-fe5e143bc696\") " pod="openstack/neutron-db-sync-9rclp" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.237732 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-scripts\") pod \"cinder-db-sync-7h875\" (UID: \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\") " pod="openstack/cinder-db-sync-7h875" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 
19:12:23.237767 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c-db-sync-config-data\") pod \"barbican-db-sync-68687\" (UID: \"f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c\") " pod="openstack/barbican-db-sync-68687" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.237782 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-config-data\") pod \"cinder-db-sync-7h875\" (UID: \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\") " pod="openstack/cinder-db-sync-7h875" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.237833 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-db-sync-config-data\") pod \"cinder-db-sync-7h875\" (UID: \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\") " pod="openstack/cinder-db-sync-7h875" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.237857 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-combined-ca-bundle\") pod \"cinder-db-sync-7h875\" (UID: \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\") " pod="openstack/cinder-db-sync-7h875" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.237879 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-etc-machine-id\") pod \"cinder-db-sync-7h875\" (UID: \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\") " pod="openstack/cinder-db-sync-7h875" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.237904 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-c74n5\" (UniqueName: \"kubernetes.io/projected/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-kube-api-access-c74n5\") pod \"cinder-db-sync-7h875\" (UID: \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\") " pod="openstack/cinder-db-sync-7h875" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.237929 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/433a9f91-dd8c-4e01-9133-fe5e143bc696-combined-ca-bundle\") pod \"neutron-db-sync-9rclp\" (UID: \"433a9f91-dd8c-4e01-9133-fe5e143bc696\") " pod="openstack/neutron-db-sync-9rclp" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.237945 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c-combined-ca-bundle\") pod \"barbican-db-sync-68687\" (UID: \"f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c\") " pod="openstack/barbican-db-sync-68687" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.237962 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6np2\" (UniqueName: \"kubernetes.io/projected/f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c-kube-api-access-l6np2\") pod \"barbican-db-sync-68687\" (UID: \"f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c\") " pod="openstack/barbican-db-sync-68687" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.237988 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/433a9f91-dd8c-4e01-9133-fe5e143bc696-config\") pod \"neutron-db-sync-9rclp\" (UID: \"433a9f91-dd8c-4e01-9133-fe5e143bc696\") " pod="openstack/neutron-db-sync-9rclp" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.242707 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-etc-machine-id\") pod \"cinder-db-sync-7h875\" (UID: \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\") " pod="openstack/cinder-db-sync-7h875" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.247103 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-config-data\") pod \"cinder-db-sync-7h875\" (UID: \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\") " pod="openstack/cinder-db-sync-7h875" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.247204 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c-combined-ca-bundle\") pod \"barbican-db-sync-68687\" (UID: \"f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c\") " pod="openstack/barbican-db-sync-68687" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.248191 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-combined-ca-bundle\") pod \"cinder-db-sync-7h875\" (UID: \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\") " pod="openstack/cinder-db-sync-7h875" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.249217 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c-db-sync-config-data\") pod \"barbican-db-sync-68687\" (UID: \"f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c\") " pod="openstack/barbican-db-sync-68687" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.264112 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-db-sync-config-data\") pod \"cinder-db-sync-7h875\" (UID: 
\"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\") " pod="openstack/cinder-db-sync-7h875" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.268523 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/433a9f91-dd8c-4e01-9133-fe5e143bc696-combined-ca-bundle\") pod \"neutron-db-sync-9rclp\" (UID: \"433a9f91-dd8c-4e01-9133-fe5e143bc696\") " pod="openstack/neutron-db-sync-9rclp" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.285678 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-scripts\") pod \"cinder-db-sync-7h875\" (UID: \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\") " pod="openstack/cinder-db-sync-7h875" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.288833 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/433a9f91-dd8c-4e01-9133-fe5e143bc696-config\") pod \"neutron-db-sync-9rclp\" (UID: \"433a9f91-dd8c-4e01-9133-fe5e143bc696\") " pod="openstack/neutron-db-sync-9rclp" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.289343 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grbjq\" (UniqueName: \"kubernetes.io/projected/433a9f91-dd8c-4e01-9133-fe5e143bc696-kube-api-access-grbjq\") pod \"neutron-db-sync-9rclp\" (UID: \"433a9f91-dd8c-4e01-9133-fe5e143bc696\") " pod="openstack/neutron-db-sync-9rclp" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.314267 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c74n5\" (UniqueName: \"kubernetes.io/projected/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-kube-api-access-c74n5\") pod \"cinder-db-sync-7h875\" (UID: \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\") " pod="openstack/cinder-db-sync-7h875" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.314371 4981 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6np2\" (UniqueName: \"kubernetes.io/projected/f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c-kube-api-access-l6np2\") pod \"barbican-db-sync-68687\" (UID: \"f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c\") " pod="openstack/barbican-db-sync-68687" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.314856 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9rclp" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.315502 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-68687" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.363186 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8k42j"] Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.364638 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8k42j" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.368003 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hg7bb" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.375331 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.375694 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.382576 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.387817 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.397665 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.397880 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.442864 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ff4f098-86f0-4676-8254-f239843c7685-config-data\") pod \"ceilometer-0\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " pod="openstack/ceilometer-0" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.443265 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ff4f098-86f0-4676-8254-f239843c7685-scripts\") pod \"ceilometer-0\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " pod="openstack/ceilometer-0" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.443302 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-config-data\") pod \"placement-db-sync-8k42j\" (UID: \"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6\") " pod="openstack/placement-db-sync-8k42j" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.443328 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-combined-ca-bundle\") pod \"placement-db-sync-8k42j\" (UID: \"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6\") " pod="openstack/placement-db-sync-8k42j" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.443355 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ff4f098-86f0-4676-8254-f239843c7685-log-httpd\") pod \"ceilometer-0\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " pod="openstack/ceilometer-0" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.443378 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-scripts\") pod \"placement-db-sync-8k42j\" (UID: \"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6\") " pod="openstack/placement-db-sync-8k42j" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.443404 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-logs\") pod \"placement-db-sync-8k42j\" (UID: \"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6\") " pod="openstack/placement-db-sync-8k42j" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.443426 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4v56\" (UniqueName: \"kubernetes.io/projected/6ff4f098-86f0-4676-8254-f239843c7685-kube-api-access-w4v56\") pod \"ceilometer-0\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " pod="openstack/ceilometer-0" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.443483 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff4f098-86f0-4676-8254-f239843c7685-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " pod="openstack/ceilometer-0" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.443511 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6ff4f098-86f0-4676-8254-f239843c7685-run-httpd\") pod \"ceilometer-0\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " pod="openstack/ceilometer-0" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.443694 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ff4f098-86f0-4676-8254-f239843c7685-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " pod="openstack/ceilometer-0" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.443738 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw7qk\" (UniqueName: \"kubernetes.io/projected/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-kube-api-access-xw7qk\") pod \"placement-db-sync-8k42j\" (UID: \"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6\") " pod="openstack/placement-db-sync-8k42j" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.444009 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9lps4" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.446816 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8k42j"] Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.467808 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.479352 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-fsb96"] Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.490140 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-cwmgz"] Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.492220 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.507251 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-cwmgz"] Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.545603 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff4f098-86f0-4676-8254-f239843c7685-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " pod="openstack/ceilometer-0" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.545647 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ff4f098-86f0-4676-8254-f239843c7685-run-httpd\") pod \"ceilometer-0\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " pod="openstack/ceilometer-0" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.545674 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ff4f098-86f0-4676-8254-f239843c7685-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " pod="openstack/ceilometer-0" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.545705 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw7qk\" (UniqueName: \"kubernetes.io/projected/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-kube-api-access-xw7qk\") pod \"placement-db-sync-8k42j\" (UID: \"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6\") " pod="openstack/placement-db-sync-8k42j" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.545775 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ff4f098-86f0-4676-8254-f239843c7685-config-data\") pod \"ceilometer-0\" (UID: 
\"6ff4f098-86f0-4676-8254-f239843c7685\") " pod="openstack/ceilometer-0" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.545815 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ff4f098-86f0-4676-8254-f239843c7685-scripts\") pod \"ceilometer-0\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " pod="openstack/ceilometer-0" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.545845 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-config-data\") pod \"placement-db-sync-8k42j\" (UID: \"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6\") " pod="openstack/placement-db-sync-8k42j" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.545872 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-combined-ca-bundle\") pod \"placement-db-sync-8k42j\" (UID: \"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6\") " pod="openstack/placement-db-sync-8k42j" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.545896 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ff4f098-86f0-4676-8254-f239843c7685-log-httpd\") pod \"ceilometer-0\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " pod="openstack/ceilometer-0" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.545919 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-scripts\") pod \"placement-db-sync-8k42j\" (UID: \"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6\") " pod="openstack/placement-db-sync-8k42j" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.545946 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-logs\") pod \"placement-db-sync-8k42j\" (UID: \"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6\") " pod="openstack/placement-db-sync-8k42j" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.545970 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4v56\" (UniqueName: \"kubernetes.io/projected/6ff4f098-86f0-4676-8254-f239843c7685-kube-api-access-w4v56\") pod \"ceilometer-0\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " pod="openstack/ceilometer-0" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.549350 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ff4f098-86f0-4676-8254-f239843c7685-log-httpd\") pod \"ceilometer-0\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " pod="openstack/ceilometer-0" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.553004 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-logs\") pod \"placement-db-sync-8k42j\" (UID: \"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6\") " pod="openstack/placement-db-sync-8k42j" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.553363 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ff4f098-86f0-4676-8254-f239843c7685-run-httpd\") pod \"ceilometer-0\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " pod="openstack/ceilometer-0" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.558490 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ff4f098-86f0-4676-8254-f239843c7685-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " 
pod="openstack/ceilometer-0" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.559276 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ff4f098-86f0-4676-8254-f239843c7685-scripts\") pod \"ceilometer-0\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " pod="openstack/ceilometer-0" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.559522 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ff4f098-86f0-4676-8254-f239843c7685-config-data\") pod \"ceilometer-0\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " pod="openstack/ceilometer-0" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.560551 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-scripts\") pod \"placement-db-sync-8k42j\" (UID: \"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6\") " pod="openstack/placement-db-sync-8k42j" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.569106 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-combined-ca-bundle\") pod \"placement-db-sync-8k42j\" (UID: \"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6\") " pod="openstack/placement-db-sync-8k42j" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.574518 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-config-data\") pod \"placement-db-sync-8k42j\" (UID: \"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6\") " pod="openstack/placement-db-sync-8k42j" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.574740 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6ff4f098-86f0-4676-8254-f239843c7685-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " pod="openstack/ceilometer-0" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.575178 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4v56\" (UniqueName: \"kubernetes.io/projected/6ff4f098-86f0-4676-8254-f239843c7685-kube-api-access-w4v56\") pod \"ceilometer-0\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " pod="openstack/ceilometer-0" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.588661 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw7qk\" (UniqueName: \"kubernetes.io/projected/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-kube-api-access-xw7qk\") pod \"placement-db-sync-8k42j\" (UID: \"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6\") " pod="openstack/placement-db-sync-8k42j" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.591824 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-7h875" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.650751 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfbt8\" (UniqueName: \"kubernetes.io/projected/ec10cd6d-3b96-4c6a-acea-af517d302163-kube-api-access-dfbt8\") pod \"dnsmasq-dns-58dd9ff6bc-cwmgz\" (UID: \"ec10cd6d-3b96-4c6a-acea-af517d302163\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.650841 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-cwmgz\" (UID: \"ec10cd6d-3b96-4c6a-acea-af517d302163\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.650890 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-cwmgz\" (UID: \"ec10cd6d-3b96-4c6a-acea-af517d302163\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.650947 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-config\") pod \"dnsmasq-dns-58dd9ff6bc-cwmgz\" (UID: \"ec10cd6d-3b96-4c6a-acea-af517d302163\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.650972 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-dns-svc\") pod 
\"dnsmasq-dns-58dd9ff6bc-cwmgz\" (UID: \"ec10cd6d-3b96-4c6a-acea-af517d302163\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.650992 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-cwmgz\" (UID: \"ec10cd6d-3b96-4c6a-acea-af517d302163\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.752904 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-cwmgz\" (UID: \"ec10cd6d-3b96-4c6a-acea-af517d302163\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.752974 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-config\") pod \"dnsmasq-dns-58dd9ff6bc-cwmgz\" (UID: \"ec10cd6d-3b96-4c6a-acea-af517d302163\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.753008 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-cwmgz\" (UID: \"ec10cd6d-3b96-4c6a-acea-af517d302163\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.753026 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-cwmgz\" (UID: 
\"ec10cd6d-3b96-4c6a-acea-af517d302163\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.753160 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfbt8\" (UniqueName: \"kubernetes.io/projected/ec10cd6d-3b96-4c6a-acea-af517d302163-kube-api-access-dfbt8\") pod \"dnsmasq-dns-58dd9ff6bc-cwmgz\" (UID: \"ec10cd6d-3b96-4c6a-acea-af517d302163\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.753242 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-cwmgz\" (UID: \"ec10cd6d-3b96-4c6a-acea-af517d302163\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.754327 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-cwmgz\" (UID: \"ec10cd6d-3b96-4c6a-acea-af517d302163\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.754925 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-cwmgz\" (UID: \"ec10cd6d-3b96-4c6a-acea-af517d302163\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.755521 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-config\") pod \"dnsmasq-dns-58dd9ff6bc-cwmgz\" (UID: \"ec10cd6d-3b96-4c6a-acea-af517d302163\") " 
pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.756115 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-cwmgz\" (UID: \"ec10cd6d-3b96-4c6a-acea-af517d302163\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" Feb 27 19:12:23 crc kubenswrapper[4981]: I0227 19:12:23.756671 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-cwmgz\" (UID: \"ec10cd6d-3b96-4c6a-acea-af517d302163\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" Feb 27 19:12:25 crc kubenswrapper[4981]: I0227 19:12:25.328141 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-7r9r7" podUID="f5ecc4da-0cfa-4632-8478-e48a3c8aba36" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 19:12:25 crc kubenswrapper[4981]: I0227 19:12:25.330439 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8k42j" Feb 27 19:12:25 crc kubenswrapper[4981]: I0227 19:12:25.362311 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:12:25 crc kubenswrapper[4981]: I0227 19:12:25.410654 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfbt8\" (UniqueName: \"kubernetes.io/projected/ec10cd6d-3b96-4c6a-acea-af517d302163-kube-api-access-dfbt8\") pod \"dnsmasq-dns-58dd9ff6bc-cwmgz\" (UID: \"ec10cd6d-3b96-4c6a-acea-af517d302163\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" Feb 27 19:12:25 crc kubenswrapper[4981]: I0227 19:12:25.494650 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-fsb96"] Feb 27 19:12:25 crc kubenswrapper[4981]: I0227 19:12:25.644428 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" Feb 27 19:12:25 crc kubenswrapper[4981]: I0227 19:12:25.792194 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9rclp"] Feb 27 19:12:25 crc kubenswrapper[4981]: I0227 19:12:25.800988 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-68687"] Feb 27 19:12:25 crc kubenswrapper[4981]: W0227 19:12:25.844035 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7bfe5b3_5f46_4247_ba87_d3ec8fb6b44c.slice/crio-18ef712d734fbb944b4cdb7e21e3abed785221f0363bdf4e7b9f68be74270c46 WatchSource:0}: Error finding container 18ef712d734fbb944b4cdb7e21e3abed785221f0363bdf4e7b9f68be74270c46: Status 404 returned error can't find the container with id 18ef712d734fbb944b4cdb7e21e3abed785221f0363bdf4e7b9f68be74270c46 Feb 27 19:12:26 crc kubenswrapper[4981]: I0227 19:12:26.435786 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-68687" event={"ID":"f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c","Type":"ContainerStarted","Data":"18ef712d734fbb944b4cdb7e21e3abed785221f0363bdf4e7b9f68be74270c46"} Feb 27 19:12:26 crc 
kubenswrapper[4981]: I0227 19:12:26.437179 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-fsb96" event={"ID":"8bf1d57f-7993-40cf-9a3a-6683c1c3b284","Type":"ContainerStarted","Data":"51cc168c79058b78a6f813572be0f6007848d4bf4b48e40f306eb5fee87de110"} Feb 27 19:12:26 crc kubenswrapper[4981]: I0227 19:12:26.438008 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9rclp" event={"ID":"433a9f91-dd8c-4e01-9133-fe5e143bc696","Type":"ContainerStarted","Data":"2e468d6cc0207af5d70dd5387f901a2a7969efe055dff5c8ada69c2822754457"} Feb 27 19:12:26 crc kubenswrapper[4981]: I0227 19:12:26.556462 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-9lps4"] Feb 27 19:12:26 crc kubenswrapper[4981]: I0227 19:12:26.583342 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7h875"] Feb 27 19:12:26 crc kubenswrapper[4981]: I0227 19:12:26.762149 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8k42j"] Feb 27 19:12:26 crc kubenswrapper[4981]: W0227 19:12:26.763541 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c06b80b_18d6_4fef_a1ce_2d513e9b58e6.slice/crio-6250adedbd10da3f42ff07713d26075e0ea19b539fe253e03892c890e0ac7dff WatchSource:0}: Error finding container 6250adedbd10da3f42ff07713d26075e0ea19b539fe253e03892c890e0ac7dff: Status 404 returned error can't find the container with id 6250adedbd10da3f42ff07713d26075e0ea19b539fe253e03892c890e0ac7dff Feb 27 19:12:26 crc kubenswrapper[4981]: I0227 19:12:26.775358 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:12:27 crc kubenswrapper[4981]: I0227 19:12:26.805899 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-cwmgz"] Feb 27 19:12:27 crc kubenswrapper[4981]: I0227 19:12:27.448451 4981 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9rclp" event={"ID":"433a9f91-dd8c-4e01-9133-fe5e143bc696","Type":"ContainerStarted","Data":"31fade82185f1e83c1d90a9aa653996bc4068bb402a2e3fde43cb5775094559e"} Feb 27 19:12:27 crc kubenswrapper[4981]: I0227 19:12:27.450673 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9lps4" event={"ID":"b6e2d77c-83d4-48ee-ae41-d464689b4bfd","Type":"ContainerStarted","Data":"61a369867fdb53ff481bf9597a82f22b9db98dc118f4b7dac2daa323d445e360"} Feb 27 19:12:27 crc kubenswrapper[4981]: I0227 19:12:27.450770 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9lps4" event={"ID":"b6e2d77c-83d4-48ee-ae41-d464689b4bfd","Type":"ContainerStarted","Data":"a48557f28354d01b10b4f724bbe201c6c278faf0dbc292b6cb50cae153f4d9a2"} Feb 27 19:12:27 crc kubenswrapper[4981]: I0227 19:12:27.451988 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7h875" event={"ID":"094a0674-7bf9-4e18-9e70-8efed0ae3ac2","Type":"ContainerStarted","Data":"a91ce6bdd9f4d511c6269f4ba7c2ab85e4389ef76745b878522998318c1cd845"} Feb 27 19:12:27 crc kubenswrapper[4981]: I0227 19:12:27.455521 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ff4f098-86f0-4676-8254-f239843c7685","Type":"ContainerStarted","Data":"20cdf877bc66f784b9288876e811d2ff9e854b4a39bc5a4ebe8d81a17ba2771b"} Feb 27 19:12:27 crc kubenswrapper[4981]: I0227 19:12:27.457098 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" event={"ID":"ec10cd6d-3b96-4c6a-acea-af517d302163","Type":"ContainerStarted","Data":"0b2aae48d7e8353227aae15db0b4e56bb9c561fbbab40040ac977f138d8db7c4"} Feb 27 19:12:27 crc kubenswrapper[4981]: I0227 19:12:27.459540 4981 generic.go:334] "Generic (PLEG): container finished" podID="8bf1d57f-7993-40cf-9a3a-6683c1c3b284" 
containerID="86ce44b7c9bd772ffc39e3c355bc260c9ca9d0fda241d5ad87f0362780d0a7c7" exitCode=0 Feb 27 19:12:27 crc kubenswrapper[4981]: I0227 19:12:27.459700 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-fsb96" event={"ID":"8bf1d57f-7993-40cf-9a3a-6683c1c3b284","Type":"ContainerDied","Data":"86ce44b7c9bd772ffc39e3c355bc260c9ca9d0fda241d5ad87f0362780d0a7c7"} Feb 27 19:12:27 crc kubenswrapper[4981]: I0227 19:12:27.462017 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8k42j" event={"ID":"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6","Type":"ContainerStarted","Data":"6250adedbd10da3f42ff07713d26075e0ea19b539fe253e03892c890e0ac7dff"} Feb 27 19:12:27 crc kubenswrapper[4981]: I0227 19:12:27.868496 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-9rclp" podStartSLOduration=5.868469988 podStartE2EDuration="5.868469988s" podCreationTimestamp="2026-02-27 19:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:12:27.855369143 +0000 UTC m=+1647.334150303" watchObservedRunningTime="2026-02-27 19:12:27.868469988 +0000 UTC m=+1647.347251148" Feb 27 19:12:27 crc kubenswrapper[4981]: I0227 19:12:27.945693 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-9lps4" podStartSLOduration=5.945668238 podStartE2EDuration="5.945668238s" podCreationTimestamp="2026-02-27 19:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:12:27.914553545 +0000 UTC m=+1647.393334705" watchObservedRunningTime="2026-02-27 19:12:27.945668238 +0000 UTC m=+1647.424449398" Feb 27 19:12:28 crc kubenswrapper[4981]: I0227 19:12:28.248295 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-fsb96" Feb 27 19:12:28 crc kubenswrapper[4981]: I0227 19:12:28.775385 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-config\") pod \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\" (UID: \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\") " Feb 27 19:12:28 crc kubenswrapper[4981]: I0227 19:12:28.775472 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-ovsdbserver-sb\") pod \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\" (UID: \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\") " Feb 27 19:12:28 crc kubenswrapper[4981]: I0227 19:12:28.775527 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-dns-swift-storage-0\") pod \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\" (UID: \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\") " Feb 27 19:12:28 crc kubenswrapper[4981]: I0227 19:12:28.775604 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-dns-svc\") pod \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\" (UID: \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\") " Feb 27 19:12:28 crc kubenswrapper[4981]: I0227 19:12:28.775784 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-ovsdbserver-nb\") pod \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\" (UID: \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\") " Feb 27 19:12:28 crc kubenswrapper[4981]: I0227 19:12:28.775830 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq4fs\" 
(UniqueName: \"kubernetes.io/projected/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-kube-api-access-nq4fs\") pod \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\" (UID: \"8bf1d57f-7993-40cf-9a3a-6683c1c3b284\") " Feb 27 19:12:28 crc kubenswrapper[4981]: I0227 19:12:28.823938 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-kube-api-access-nq4fs" (OuterVolumeSpecName: "kube-api-access-nq4fs") pod "8bf1d57f-7993-40cf-9a3a-6683c1c3b284" (UID: "8bf1d57f-7993-40cf-9a3a-6683c1c3b284"). InnerVolumeSpecName "kube-api-access-nq4fs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:12:28 crc kubenswrapper[4981]: I0227 19:12:28.831213 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8bf1d57f-7993-40cf-9a3a-6683c1c3b284" (UID: "8bf1d57f-7993-40cf-9a3a-6683c1c3b284"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:12:28 crc kubenswrapper[4981]: I0227 19:12:28.848592 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-fsb96" event={"ID":"8bf1d57f-7993-40cf-9a3a-6683c1c3b284","Type":"ContainerDied","Data":"51cc168c79058b78a6f813572be0f6007848d4bf4b48e40f306eb5fee87de110"} Feb 27 19:12:28 crc kubenswrapper[4981]: I0227 19:12:28.848666 4981 scope.go:117] "RemoveContainer" containerID="86ce44b7c9bd772ffc39e3c355bc260c9ca9d0fda241d5ad87f0362780d0a7c7" Feb 27 19:12:28 crc kubenswrapper[4981]: I0227 19:12:28.848860 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-fsb96" Feb 27 19:12:28 crc kubenswrapper[4981]: I0227 19:12:28.862206 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8bf1d57f-7993-40cf-9a3a-6683c1c3b284" (UID: "8bf1d57f-7993-40cf-9a3a-6683c1c3b284"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:12:28 crc kubenswrapper[4981]: I0227 19:12:28.872155 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8bf1d57f-7993-40cf-9a3a-6683c1c3b284" (UID: "8bf1d57f-7993-40cf-9a3a-6683c1c3b284"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:12:28 crc kubenswrapper[4981]: I0227 19:12:28.878185 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 19:12:28 crc kubenswrapper[4981]: I0227 19:12:28.878236 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq4fs\" (UniqueName: \"kubernetes.io/projected/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-kube-api-access-nq4fs\") on node \"crc\" DevicePath \"\"" Feb 27 19:12:28 crc kubenswrapper[4981]: I0227 19:12:28.878249 4981 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 19:12:28 crc kubenswrapper[4981]: I0227 19:12:28.878260 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 19:12:28 crc kubenswrapper[4981]: I0227 19:12:28.884838 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8bf1d57f-7993-40cf-9a3a-6683c1c3b284" (UID: "8bf1d57f-7993-40cf-9a3a-6683c1c3b284"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:12:28 crc kubenswrapper[4981]: I0227 19:12:28.980841 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 19:12:29 crc kubenswrapper[4981]: I0227 19:12:29.111365 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-config" (OuterVolumeSpecName: "config") pod "8bf1d57f-7993-40cf-9a3a-6683c1c3b284" (UID: "8bf1d57f-7993-40cf-9a3a-6683c1c3b284"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:12:29 crc kubenswrapper[4981]: I0227 19:12:29.185631 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bf1d57f-7993-40cf-9a3a-6683c1c3b284-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:12:29 crc kubenswrapper[4981]: I0227 19:12:29.764330 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-fsb96"] Feb 27 19:12:29 crc kubenswrapper[4981]: I0227 19:12:29.781838 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-fsb96"] Feb 27 19:12:29 crc kubenswrapper[4981]: I0227 19:12:29.860895 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" event={"ID":"ec10cd6d-3b96-4c6a-acea-af517d302163","Type":"ContainerStarted","Data":"6e11ba2819f7ea7c3b207980fc4edee4f4e723d4d84c8a8caae9d810a3998606"} Feb 27 19:12:30 crc kubenswrapper[4981]: I0227 19:12:30.487061 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:12:30 crc kubenswrapper[4981]: I0227 19:12:30.876163 4981 generic.go:334] "Generic (PLEG): container finished" podID="ec10cd6d-3b96-4c6a-acea-af517d302163" containerID="6e11ba2819f7ea7c3b207980fc4edee4f4e723d4d84c8a8caae9d810a3998606" exitCode=0 Feb 27 19:12:30 crc kubenswrapper[4981]: I0227 19:12:30.876245 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" event={"ID":"ec10cd6d-3b96-4c6a-acea-af517d302163","Type":"ContainerDied","Data":"6e11ba2819f7ea7c3b207980fc4edee4f4e723d4d84c8a8caae9d810a3998606"} Feb 27 19:12:31 crc kubenswrapper[4981]: I0227 19:12:31.650128 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bf1d57f-7993-40cf-9a3a-6683c1c3b284" path="/var/lib/kubelet/pods/8bf1d57f-7993-40cf-9a3a-6683c1c3b284/volumes" Feb 27 19:12:32 crc kubenswrapper[4981]: I0227 19:12:32.917635 4981 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" event={"ID":"ec10cd6d-3b96-4c6a-acea-af517d302163","Type":"ContainerStarted","Data":"a85cb7ace08e7d452c06b0b251fb3dabcbb9ca5e92634e53fe105efb7f3278ed"} Feb 27 19:12:32 crc kubenswrapper[4981]: I0227 19:12:32.918168 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" Feb 27 19:12:32 crc kubenswrapper[4981]: I0227 19:12:32.950779 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" podStartSLOduration=9.950753045999999 podStartE2EDuration="9.950753046s" podCreationTimestamp="2026-02-27 19:12:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:12:32.941781538 +0000 UTC m=+1652.420562698" watchObservedRunningTime="2026-02-27 19:12:32.950753046 +0000 UTC m=+1652.429534206" Feb 27 19:12:34 crc kubenswrapper[4981]: I0227 19:12:34.629467 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" Feb 27 19:12:34 crc kubenswrapper[4981]: E0227 19:12:34.630249 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:12:36 crc kubenswrapper[4981]: I0227 19:12:36.218333 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-rb599" podUID="2557a2d1-c08e-4a0a-b04e-a05aacf26465" containerName="registry-server" probeResult="failure" output=< Feb 27 19:12:36 crc kubenswrapper[4981]: 
timeout: failed to connect service ":50051" within 1s Feb 27 19:12:36 crc kubenswrapper[4981]: > Feb 27 19:12:36 crc kubenswrapper[4981]: I0227 19:12:36.241613 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-rb599" podUID="2557a2d1-c08e-4a0a-b04e-a05aacf26465" containerName="registry-server" probeResult="failure" output=< Feb 27 19:12:36 crc kubenswrapper[4981]: timeout: health rpc did not complete within 1s Feb 27 19:12:36 crc kubenswrapper[4981]: > Feb 27 19:12:40 crc kubenswrapper[4981]: I0227 19:12:40.646204 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" Feb 27 19:12:40 crc kubenswrapper[4981]: I0227 19:12:40.721364 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-kcc2s"] Feb 27 19:12:40 crc kubenswrapper[4981]: I0227 19:12:40.721745 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" podUID="ad5174ae-aa09-4234-b2dc-69d19d951501" containerName="dnsmasq-dns" containerID="cri-o://17b63b7be8f3e351a48a0cab6cd2c2ec27e903a8a8d02a97740b2891c4da7407" gracePeriod=10 Feb 27 19:12:45 crc kubenswrapper[4981]: I0227 19:12:45.329373 4981 generic.go:334] "Generic (PLEG): container finished" podID="ad5174ae-aa09-4234-b2dc-69d19d951501" containerID="17b63b7be8f3e351a48a0cab6cd2c2ec27e903a8a8d02a97740b2891c4da7407" exitCode=0 Feb 27 19:12:45 crc kubenswrapper[4981]: I0227 19:12:45.329437 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" event={"ID":"ad5174ae-aa09-4234-b2dc-69d19d951501","Type":"ContainerDied","Data":"17b63b7be8f3e351a48a0cab6cd2c2ec27e903a8a8d02a97740b2891c4da7407"} Feb 27 19:12:45 crc kubenswrapper[4981]: I0227 19:12:45.435018 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" podUID="ad5174ae-aa09-4234-b2dc-69d19d951501" 
containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: connect: connection refused" Feb 27 19:12:47 crc kubenswrapper[4981]: I0227 19:12:47.629335 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" Feb 27 19:12:47 crc kubenswrapper[4981]: E0227 19:12:47.630396 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:12:51 crc kubenswrapper[4981]: I0227 19:12:51.395175 4981 generic.go:334] "Generic (PLEG): container finished" podID="b6e2d77c-83d4-48ee-ae41-d464689b4bfd" containerID="61a369867fdb53ff481bf9597a82f22b9db98dc118f4b7dac2daa323d445e360" exitCode=0 Feb 27 19:12:51 crc kubenswrapper[4981]: I0227 19:12:51.395268 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9lps4" event={"ID":"b6e2d77c-83d4-48ee-ae41-d464689b4bfd","Type":"ContainerDied","Data":"61a369867fdb53ff481bf9597a82f22b9db98dc118f4b7dac2daa323d445e360"} Feb 27 19:12:54 crc kubenswrapper[4981]: I0227 19:12:54.614610 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z6wdt"] Feb 27 19:12:54 crc kubenswrapper[4981]: E0227 19:12:54.615980 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf1d57f-7993-40cf-9a3a-6683c1c3b284" containerName="init" Feb 27 19:12:54 crc kubenswrapper[4981]: I0227 19:12:54.615995 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf1d57f-7993-40cf-9a3a-6683c1c3b284" containerName="init" Feb 27 19:12:54 crc kubenswrapper[4981]: I0227 19:12:54.616194 4981 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="8bf1d57f-7993-40cf-9a3a-6683c1c3b284" containerName="init" Feb 27 19:12:54 crc kubenswrapper[4981]: I0227 19:12:54.617888 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z6wdt" Feb 27 19:12:54 crc kubenswrapper[4981]: I0227 19:12:54.630156 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z6wdt"] Feb 27 19:12:54 crc kubenswrapper[4981]: I0227 19:12:54.665378 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09fc44ca-39ea-428a-b743-728f222a63b9-catalog-content\") pod \"certified-operators-z6wdt\" (UID: \"09fc44ca-39ea-428a-b743-728f222a63b9\") " pod="openshift-marketplace/certified-operators-z6wdt" Feb 27 19:12:54 crc kubenswrapper[4981]: I0227 19:12:54.665845 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sszw\" (UniqueName: \"kubernetes.io/projected/09fc44ca-39ea-428a-b743-728f222a63b9-kube-api-access-8sszw\") pod \"certified-operators-z6wdt\" (UID: \"09fc44ca-39ea-428a-b743-728f222a63b9\") " pod="openshift-marketplace/certified-operators-z6wdt" Feb 27 19:12:54 crc kubenswrapper[4981]: I0227 19:12:54.665893 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09fc44ca-39ea-428a-b743-728f222a63b9-utilities\") pod \"certified-operators-z6wdt\" (UID: \"09fc44ca-39ea-428a-b743-728f222a63b9\") " pod="openshift-marketplace/certified-operators-z6wdt" Feb 27 19:12:54 crc kubenswrapper[4981]: I0227 19:12:54.768116 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sszw\" (UniqueName: \"kubernetes.io/projected/09fc44ca-39ea-428a-b743-728f222a63b9-kube-api-access-8sszw\") pod 
\"certified-operators-z6wdt\" (UID: \"09fc44ca-39ea-428a-b743-728f222a63b9\") " pod="openshift-marketplace/certified-operators-z6wdt" Feb 27 19:12:54 crc kubenswrapper[4981]: I0227 19:12:54.768199 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09fc44ca-39ea-428a-b743-728f222a63b9-utilities\") pod \"certified-operators-z6wdt\" (UID: \"09fc44ca-39ea-428a-b743-728f222a63b9\") " pod="openshift-marketplace/certified-operators-z6wdt" Feb 27 19:12:54 crc kubenswrapper[4981]: I0227 19:12:54.768295 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09fc44ca-39ea-428a-b743-728f222a63b9-catalog-content\") pod \"certified-operators-z6wdt\" (UID: \"09fc44ca-39ea-428a-b743-728f222a63b9\") " pod="openshift-marketplace/certified-operators-z6wdt" Feb 27 19:12:54 crc kubenswrapper[4981]: I0227 19:12:54.768942 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09fc44ca-39ea-428a-b743-728f222a63b9-catalog-content\") pod \"certified-operators-z6wdt\" (UID: \"09fc44ca-39ea-428a-b743-728f222a63b9\") " pod="openshift-marketplace/certified-operators-z6wdt" Feb 27 19:12:54 crc kubenswrapper[4981]: I0227 19:12:54.768978 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09fc44ca-39ea-428a-b743-728f222a63b9-utilities\") pod \"certified-operators-z6wdt\" (UID: \"09fc44ca-39ea-428a-b743-728f222a63b9\") " pod="openshift-marketplace/certified-operators-z6wdt" Feb 27 19:12:54 crc kubenswrapper[4981]: I0227 19:12:54.802983 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sszw\" (UniqueName: \"kubernetes.io/projected/09fc44ca-39ea-428a-b743-728f222a63b9-kube-api-access-8sszw\") pod \"certified-operators-z6wdt\" (UID: 
\"09fc44ca-39ea-428a-b743-728f222a63b9\") " pod="openshift-marketplace/certified-operators-z6wdt" Feb 27 19:12:54 crc kubenswrapper[4981]: I0227 19:12:54.952269 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z6wdt" Feb 27 19:12:55 crc kubenswrapper[4981]: I0227 19:12:55.437009 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" podUID="ad5174ae-aa09-4234-b2dc-69d19d951501" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: i/o timeout" Feb 27 19:12:58 crc kubenswrapper[4981]: I0227 19:12:58.629837 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" Feb 27 19:12:58 crc kubenswrapper[4981]: E0227 19:12:58.630769 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:13:00 crc kubenswrapper[4981]: I0227 19:13:00.437591 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" podUID="ad5174ae-aa09-4234-b2dc-69d19d951501" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: i/o timeout" Feb 27 19:13:00 crc kubenswrapper[4981]: I0227 19:13:00.439419 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" Feb 27 19:13:05 crc kubenswrapper[4981]: I0227 19:13:05.440213 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" podUID="ad5174ae-aa09-4234-b2dc-69d19d951501" containerName="dnsmasq-dns" 
probeResult="failure" output="dial tcp 10.217.0.142:5353: i/o timeout" Feb 27 19:13:10 crc kubenswrapper[4981]: I0227 19:13:10.441977 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" podUID="ad5174ae-aa09-4234-b2dc-69d19d951501" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: i/o timeout" Feb 27 19:13:12 crc kubenswrapper[4981]: I0227 19:13:12.629294 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" Feb 27 19:13:12 crc kubenswrapper[4981]: E0227 19:13:12.631376 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:13:15 crc kubenswrapper[4981]: I0227 19:13:15.442533 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" podUID="ad5174ae-aa09-4234-b2dc-69d19d951501" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: i/o timeout" Feb 27 19:13:20 crc kubenswrapper[4981]: I0227 19:13:20.443259 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" podUID="ad5174ae-aa09-4234-b2dc-69d19d951501" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: i/o timeout" Feb 27 19:13:25 crc kubenswrapper[4981]: I0227 19:13:25.444527 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" podUID="ad5174ae-aa09-4234-b2dc-69d19d951501" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: i/o timeout" Feb 27 19:13:26 
crc kubenswrapper[4981]: I0227 19:13:26.629814 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" Feb 27 19:13:26 crc kubenswrapper[4981]: E0227 19:13:26.631373 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:13:30 crc kubenswrapper[4981]: I0227 19:13:30.445786 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" podUID="ad5174ae-aa09-4234-b2dc-69d19d951501" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: i/o timeout" Feb 27 19:13:35 crc kubenswrapper[4981]: I0227 19:13:35.451281 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" podUID="ad5174ae-aa09-4234-b2dc-69d19d951501" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: i/o timeout" Feb 27 19:13:39 crc kubenswrapper[4981]: I0227 19:13:39.632600 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" Feb 27 19:13:39 crc kubenswrapper[4981]: E0227 19:13:39.633498 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:13:40 crc kubenswrapper[4981]: I0227 
19:13:40.454474 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" podUID="ad5174ae-aa09-4234-b2dc-69d19d951501" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: i/o timeout" Feb 27 19:13:45 crc kubenswrapper[4981]: I0227 19:13:45.455542 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" podUID="ad5174ae-aa09-4234-b2dc-69d19d951501" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: i/o timeout" Feb 27 19:13:50 crc kubenswrapper[4981]: I0227 19:13:50.459788 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" podUID="ad5174ae-aa09-4234-b2dc-69d19d951501" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: i/o timeout" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.017365 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9lps4" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.024830 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.103569 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9lps4" event={"ID":"b6e2d77c-83d4-48ee-ae41-d464689b4bfd","Type":"ContainerDied","Data":"a48557f28354d01b10b4f724bbe201c6c278faf0dbc292b6cb50cae153f4d9a2"} Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.103644 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a48557f28354d01b10b4f724bbe201c6c278faf0dbc292b6cb50cae153f4d9a2" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.103805 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-9lps4" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.105418 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" event={"ID":"ad5174ae-aa09-4234-b2dc-69d19d951501","Type":"ContainerDied","Data":"5f6fb0362a7f7716a420007e3aaa0936c9d35f7506bc28ee1ba56c9b47bf250b"} Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.105461 4981 scope.go:117] "RemoveContainer" containerID="17b63b7be8f3e351a48a0cab6cd2c2ec27e903a8a8d02a97740b2891c4da7407" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.105605 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.121316 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-config-data\") pod \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\" (UID: \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\") " Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.121450 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-scripts\") pod \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\" (UID: \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\") " Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.121480 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-fernet-keys\") pod \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\" (UID: \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\") " Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.122259 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-ovsdbserver-nb\") pod \"ad5174ae-aa09-4234-b2dc-69d19d951501\" (UID: \"ad5174ae-aa09-4234-b2dc-69d19d951501\") " Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.122289 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-credential-keys\") pod \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\" (UID: \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\") " Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.122728 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-config\") pod \"ad5174ae-aa09-4234-b2dc-69d19d951501\" (UID: \"ad5174ae-aa09-4234-b2dc-69d19d951501\") " Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.122770 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-dns-svc\") pod \"ad5174ae-aa09-4234-b2dc-69d19d951501\" (UID: \"ad5174ae-aa09-4234-b2dc-69d19d951501\") " Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.122834 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgqvk\" (UniqueName: \"kubernetes.io/projected/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-kube-api-access-kgqvk\") pod \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\" (UID: \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\") " Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.122858 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-ovsdbserver-sb\") pod \"ad5174ae-aa09-4234-b2dc-69d19d951501\" (UID: \"ad5174ae-aa09-4234-b2dc-69d19d951501\") " Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.122904 4981 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-dns-swift-storage-0\") pod \"ad5174ae-aa09-4234-b2dc-69d19d951501\" (UID: \"ad5174ae-aa09-4234-b2dc-69d19d951501\") " Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.122961 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-combined-ca-bundle\") pod \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\" (UID: \"b6e2d77c-83d4-48ee-ae41-d464689b4bfd\") " Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.122998 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kpnf\" (UniqueName: \"kubernetes.io/projected/ad5174ae-aa09-4234-b2dc-69d19d951501-kube-api-access-7kpnf\") pod \"ad5174ae-aa09-4234-b2dc-69d19d951501\" (UID: \"ad5174ae-aa09-4234-b2dc-69d19d951501\") " Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.136376 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-scripts" (OuterVolumeSpecName: "scripts") pod "b6e2d77c-83d4-48ee-ae41-d464689b4bfd" (UID: "b6e2d77c-83d4-48ee-ae41-d464689b4bfd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.137512 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.151274 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-kube-api-access-kgqvk" (OuterVolumeSpecName: "kube-api-access-kgqvk") pod "b6e2d77c-83d4-48ee-ae41-d464689b4bfd" (UID: "b6e2d77c-83d4-48ee-ae41-d464689b4bfd"). InnerVolumeSpecName "kube-api-access-kgqvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.153203 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-config-data" (OuterVolumeSpecName: "config-data") pod "b6e2d77c-83d4-48ee-ae41-d464689b4bfd" (UID: "b6e2d77c-83d4-48ee-ae41-d464689b4bfd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.161278 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b6e2d77c-83d4-48ee-ae41-d464689b4bfd" (UID: "b6e2d77c-83d4-48ee-ae41-d464689b4bfd"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.168226 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b6e2d77c-83d4-48ee-ae41-d464689b4bfd" (UID: "b6e2d77c-83d4-48ee-ae41-d464689b4bfd"). 
InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.178586 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad5174ae-aa09-4234-b2dc-69d19d951501-kube-api-access-7kpnf" (OuterVolumeSpecName: "kube-api-access-7kpnf") pod "ad5174ae-aa09-4234-b2dc-69d19d951501" (UID: "ad5174ae-aa09-4234-b2dc-69d19d951501"). InnerVolumeSpecName "kube-api-access-7kpnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.217597 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ad5174ae-aa09-4234-b2dc-69d19d951501" (UID: "ad5174ae-aa09-4234-b2dc-69d19d951501"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.232618 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ad5174ae-aa09-4234-b2dc-69d19d951501" (UID: "ad5174ae-aa09-4234-b2dc-69d19d951501"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.247659 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kpnf\" (UniqueName: \"kubernetes.io/projected/ad5174ae-aa09-4234-b2dc-69d19d951501-kube-api-access-7kpnf\") on node \"crc\" DevicePath \"\"" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.247709 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.247724 4981 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.247735 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.247747 4981 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.247760 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgqvk\" (UniqueName: \"kubernetes.io/projected/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-kube-api-access-kgqvk\") on node \"crc\" DevicePath \"\"" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.247772 4981 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 
19:13:52.251222 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ad5174ae-aa09-4234-b2dc-69d19d951501" (UID: "ad5174ae-aa09-4234-b2dc-69d19d951501"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.284402 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ad5174ae-aa09-4234-b2dc-69d19d951501" (UID: "ad5174ae-aa09-4234-b2dc-69d19d951501"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.290761 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-config" (OuterVolumeSpecName: "config") pod "ad5174ae-aa09-4234-b2dc-69d19d951501" (UID: "ad5174ae-aa09-4234-b2dc-69d19d951501"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.292277 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6e2d77c-83d4-48ee-ae41-d464689b4bfd" (UID: "b6e2d77c-83d4-48ee-ae41-d464689b4bfd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.349354 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6e2d77c-83d4-48ee-ae41-d464689b4bfd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.349395 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.349408 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.349416 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ad5174ae-aa09-4234-b2dc-69d19d951501-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.443734 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-kcc2s"] Feb 27 19:13:52 crc kubenswrapper[4981]: I0227 19:13:52.451209 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-kcc2s"] Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.214099 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-9lps4"] Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.223711 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-9lps4"] Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.314112 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-ncckq"] Feb 27 19:13:53 crc kubenswrapper[4981]: E0227 19:13:53.314626 4981 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ad5174ae-aa09-4234-b2dc-69d19d951501" containerName="init" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.314646 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad5174ae-aa09-4234-b2dc-69d19d951501" containerName="init" Feb 27 19:13:53 crc kubenswrapper[4981]: E0227 19:13:53.314666 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad5174ae-aa09-4234-b2dc-69d19d951501" containerName="dnsmasq-dns" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.314675 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad5174ae-aa09-4234-b2dc-69d19d951501" containerName="dnsmasq-dns" Feb 27 19:13:53 crc kubenswrapper[4981]: E0227 19:13:53.314702 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6e2d77c-83d4-48ee-ae41-d464689b4bfd" containerName="keystone-bootstrap" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.314709 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6e2d77c-83d4-48ee-ae41-d464689b4bfd" containerName="keystone-bootstrap" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.314954 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6e2d77c-83d4-48ee-ae41-d464689b4bfd" containerName="keystone-bootstrap" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.314984 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad5174ae-aa09-4234-b2dc-69d19d951501" containerName="dnsmasq-dns" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.315656 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ncckq" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.319500 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.319761 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4zdz2" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.320170 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.320417 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.320613 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.323309 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ncckq"] Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.471966 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-scripts\") pod \"keystone-bootstrap-ncckq\" (UID: \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\") " pod="openstack/keystone-bootstrap-ncckq" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.472029 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-credential-keys\") pod \"keystone-bootstrap-ncckq\" (UID: \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\") " pod="openstack/keystone-bootstrap-ncckq" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.472101 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-combined-ca-bundle\") pod \"keystone-bootstrap-ncckq\" (UID: \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\") " pod="openstack/keystone-bootstrap-ncckq" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.472127 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-config-data\") pod \"keystone-bootstrap-ncckq\" (UID: \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\") " pod="openstack/keystone-bootstrap-ncckq" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.472176 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5q8l\" (UniqueName: \"kubernetes.io/projected/5640be3b-ba9b-4530-8bf8-595f0428c3ee-kube-api-access-d5q8l\") pod \"keystone-bootstrap-ncckq\" (UID: \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\") " pod="openstack/keystone-bootstrap-ncckq" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.472394 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-fernet-keys\") pod \"keystone-bootstrap-ncckq\" (UID: \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\") " pod="openstack/keystone-bootstrap-ncckq" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.574973 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-fernet-keys\") pod \"keystone-bootstrap-ncckq\" (UID: \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\") " pod="openstack/keystone-bootstrap-ncckq" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.575175 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-scripts\") pod \"keystone-bootstrap-ncckq\" (UID: \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\") " pod="openstack/keystone-bootstrap-ncckq" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.575269 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-credential-keys\") pod \"keystone-bootstrap-ncckq\" (UID: \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\") " pod="openstack/keystone-bootstrap-ncckq" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.575382 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-combined-ca-bundle\") pod \"keystone-bootstrap-ncckq\" (UID: \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\") " pod="openstack/keystone-bootstrap-ncckq" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.575432 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-config-data\") pod \"keystone-bootstrap-ncckq\" (UID: \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\") " pod="openstack/keystone-bootstrap-ncckq" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.575528 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5q8l\" (UniqueName: \"kubernetes.io/projected/5640be3b-ba9b-4530-8bf8-595f0428c3ee-kube-api-access-d5q8l\") pod \"keystone-bootstrap-ncckq\" (UID: \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\") " pod="openstack/keystone-bootstrap-ncckq" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.581155 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-combined-ca-bundle\") pod 
\"keystone-bootstrap-ncckq\" (UID: \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\") " pod="openstack/keystone-bootstrap-ncckq" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.581527 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-scripts\") pod \"keystone-bootstrap-ncckq\" (UID: \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\") " pod="openstack/keystone-bootstrap-ncckq" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.581724 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-config-data\") pod \"keystone-bootstrap-ncckq\" (UID: \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\") " pod="openstack/keystone-bootstrap-ncckq" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.581911 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-credential-keys\") pod \"keystone-bootstrap-ncckq\" (UID: \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\") " pod="openstack/keystone-bootstrap-ncckq" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.583661 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-fernet-keys\") pod \"keystone-bootstrap-ncckq\" (UID: \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\") " pod="openstack/keystone-bootstrap-ncckq" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.593775 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5q8l\" (UniqueName: \"kubernetes.io/projected/5640be3b-ba9b-4530-8bf8-595f0428c3ee-kube-api-access-d5q8l\") pod \"keystone-bootstrap-ncckq\" (UID: \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\") " pod="openstack/keystone-bootstrap-ncckq" Feb 27 19:13:53 crc 
kubenswrapper[4981]: I0227 19:13:53.634129 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ncckq" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.643718 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad5174ae-aa09-4234-b2dc-69d19d951501" path="/var/lib/kubelet/pods/ad5174ae-aa09-4234-b2dc-69d19d951501/volumes" Feb 27 19:13:53 crc kubenswrapper[4981]: I0227 19:13:53.645531 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6e2d77c-83d4-48ee-ae41-d464689b4bfd" path="/var/lib/kubelet/pods/b6e2d77c-83d4-48ee-ae41-d464689b4bfd/volumes" Feb 27 19:13:54 crc kubenswrapper[4981]: I0227 19:13:54.629912 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" Feb 27 19:13:54 crc kubenswrapper[4981]: E0227 19:13:54.630764 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:13:55 crc kubenswrapper[4981]: I0227 19:13:55.460278 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-kcc2s" podUID="ad5174ae-aa09-4234-b2dc-69d19d951501" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: i/o timeout" Feb 27 19:14:00 crc kubenswrapper[4981]: I0227 19:14:00.150492 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536994-kg6pv"] Feb 27 19:14:00 crc kubenswrapper[4981]: I0227 19:14:00.152989 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536994-kg6pv" Feb 27 19:14:00 crc kubenswrapper[4981]: I0227 19:14:00.155896 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:14:00 crc kubenswrapper[4981]: I0227 19:14:00.156145 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:14:00 crc kubenswrapper[4981]: I0227 19:14:00.156315 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf" Feb 27 19:14:00 crc kubenswrapper[4981]: I0227 19:14:00.171866 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536994-kg6pv"] Feb 27 19:14:00 crc kubenswrapper[4981]: I0227 19:14:00.318383 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8zf7\" (UniqueName: \"kubernetes.io/projected/91e1417f-019e-484a-afd2-05ae98b58cee-kube-api-access-p8zf7\") pod \"auto-csr-approver-29536994-kg6pv\" (UID: \"91e1417f-019e-484a-afd2-05ae98b58cee\") " pod="openshift-infra/auto-csr-approver-29536994-kg6pv" Feb 27 19:14:00 crc kubenswrapper[4981]: I0227 19:14:00.420251 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8zf7\" (UniqueName: \"kubernetes.io/projected/91e1417f-019e-484a-afd2-05ae98b58cee-kube-api-access-p8zf7\") pod \"auto-csr-approver-29536994-kg6pv\" (UID: \"91e1417f-019e-484a-afd2-05ae98b58cee\") " pod="openshift-infra/auto-csr-approver-29536994-kg6pv" Feb 27 19:14:00 crc kubenswrapper[4981]: I0227 19:14:00.448633 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8zf7\" (UniqueName: \"kubernetes.io/projected/91e1417f-019e-484a-afd2-05ae98b58cee-kube-api-access-p8zf7\") pod \"auto-csr-approver-29536994-kg6pv\" (UID: \"91e1417f-019e-484a-afd2-05ae98b58cee\") " 
pod="openshift-infra/auto-csr-approver-29536994-kg6pv" Feb 27 19:14:00 crc kubenswrapper[4981]: I0227 19:14:00.480543 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536994-kg6pv" Feb 27 19:14:08 crc kubenswrapper[4981]: E0227 19:14:08.874368 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 27 19:14:08 crc kubenswrapper[4981]: E0227 19:14:08.875244 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xw7qk,ReadOnly:true,MountPa
th:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-8k42j_openstack(8c06b80b-18d6-4fef-a1ce-2d513e9b58e6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:14:08 crc kubenswrapper[4981]: E0227 19:14:08.876902 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-8k42j" podUID="8c06b80b-18d6-4fef-a1ce-2d513e9b58e6" Feb 27 19:14:09 crc kubenswrapper[4981]: E0227 19:14:09.275559 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-8k42j" podUID="8c06b80b-18d6-4fef-a1ce-2d513e9b58e6" Feb 27 19:14:09 crc kubenswrapper[4981]: I0227 19:14:09.629221 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" Feb 27 19:14:09 crc kubenswrapper[4981]: E0227 19:14:09.629476 4981 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:14:18 crc kubenswrapper[4981]: I0227 19:14:18.161523 4981 scope.go:117] "RemoveContainer" containerID="6b54c45daadaacd78170538a14fa9845b30552324d351b3d6a41e8d1204afad7" Feb 27 19:14:44 crc kubenswrapper[4981]: I0227 19:14:23.629049 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" Feb 27 19:14:44 crc kubenswrapper[4981]: E0227 19:14:23.630019 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:14:44 crc kubenswrapper[4981]: I0227 19:14:24.962043 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ff4f098-86f0-4676-8254-f239843c7685","Type":"ContainerStarted","Data":"4c5f12a7c4cf24be30b08af6ae2a985fa57fd2f7fb324c623e373af7473f4803"} Feb 27 19:14:44 crc kubenswrapper[4981]: I0227 19:14:24.963688 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-68687" event={"ID":"f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c","Type":"ContainerStarted","Data":"94f7cad0d48ab4cdb999663cdeab7da040c74451f9b64c26617c577f369d2053"} Feb 27 19:14:44 crc kubenswrapper[4981]: I0227 19:14:29.020096 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-db-sync-68687" podStartSLOduration=14.759312635 podStartE2EDuration="2m7.020077277s" podCreationTimestamp="2026-02-27 19:12:22 +0000 UTC" firstStartedPulling="2026-02-27 19:12:25.878834357 +0000 UTC m=+1645.357615517" lastFinishedPulling="2026-02-27 19:14:18.139598999 +0000 UTC m=+1757.618380159" observedRunningTime="2026-02-27 19:14:29.01725765 +0000 UTC m=+1768.496038810" watchObservedRunningTime="2026-02-27 19:14:29.020077277 +0000 UTC m=+1768.498858437" Feb 27 19:14:44 crc kubenswrapper[4981]: I0227 19:14:29.389469 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-bvts5" podUID="7cbe4d2e-bd57-452d-b873-709e1de024e7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 19:14:44 crc kubenswrapper[4981]: E0227 19:14:29.981751 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 27 19:14:44 crc kubenswrapper[4981]: E0227 19:14:29.981932 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c74n5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-7h875_openstack(094a0674-7bf9-4e18-9e70-8efed0ae3ac2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:14:44 crc kubenswrapper[4981]: E0227 19:14:29.983198 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-7h875" podUID="094a0674-7bf9-4e18-9e70-8efed0ae3ac2" Feb 27 19:14:44 crc kubenswrapper[4981]: E0227 19:14:30.019799 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-7h875" podUID="094a0674-7bf9-4e18-9e70-8efed0ae3ac2" Feb 27 19:14:44 crc kubenswrapper[4981]: I0227 19:14:34.629321 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" Feb 27 19:14:44 crc kubenswrapper[4981]: E0227 19:14:34.629862 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:14:44 crc kubenswrapper[4981]: I0227 19:14:42.633318 4981 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 19:14:45 crc 
kubenswrapper[4981]: I0227 19:14:45.080718 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ncckq"] Feb 27 19:14:45 crc kubenswrapper[4981]: I0227 19:14:45.120876 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z6wdt"] Feb 27 19:14:45 crc kubenswrapper[4981]: I0227 19:14:45.252485 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536994-kg6pv"] Feb 27 19:14:45 crc kubenswrapper[4981]: W0227 19:14:45.988317 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5640be3b_ba9b_4530_8bf8_595f0428c3ee.slice/crio-af684c0980c28dc3bb84c1d5b88876178736e72706aa776697aaea5b78b1d202 WatchSource:0}: Error finding container af684c0980c28dc3bb84c1d5b88876178736e72706aa776697aaea5b78b1d202: Status 404 returned error can't find the container with id af684c0980c28dc3bb84c1d5b88876178736e72706aa776697aaea5b78b1d202 Feb 27 19:14:46 crc kubenswrapper[4981]: I0227 19:14:46.235738 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536994-kg6pv" event={"ID":"91e1417f-019e-484a-afd2-05ae98b58cee","Type":"ContainerStarted","Data":"da15b785863fdd56ef7f284557d94f5d3bb3b0999bf4fd284578380d3f3a7093"} Feb 27 19:14:46 crc kubenswrapper[4981]: I0227 19:14:46.237380 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ncckq" event={"ID":"5640be3b-ba9b-4530-8bf8-595f0428c3ee","Type":"ContainerStarted","Data":"af684c0980c28dc3bb84c1d5b88876178736e72706aa776697aaea5b78b1d202"} Feb 27 19:14:46 crc kubenswrapper[4981]: I0227 19:14:46.238279 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6wdt" event={"ID":"09fc44ca-39ea-428a-b743-728f222a63b9","Type":"ContainerStarted","Data":"03aa956917165b3b8c9c22c2a08f0a8bcf36f914b15127f366cb3bada9a484aa"} Feb 27 
19:14:46 crc kubenswrapper[4981]: I0227 19:14:46.629208 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" Feb 27 19:14:46 crc kubenswrapper[4981]: E0227 19:14:46.629479 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:14:47 crc kubenswrapper[4981]: I0227 19:14:47.579448 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 27 19:14:52 crc kubenswrapper[4981]: I0227 19:14:52.295097 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8k42j" event={"ID":"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6","Type":"ContainerStarted","Data":"cce6d6c81c960fa6f31644866b722a77872ba205b67bf733c80144cc6d6e3dfc"} Feb 27 19:14:52 crc kubenswrapper[4981]: I0227 19:14:52.299239 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ncckq" event={"ID":"5640be3b-ba9b-4530-8bf8-595f0428c3ee","Type":"ContainerStarted","Data":"4e3e729a2f7c99f11756596ccfffce4dcd67e9c1cabd75f3b556e598bd9ab27b"} Feb 27 19:14:52 crc kubenswrapper[4981]: I0227 19:14:52.301287 4981 generic.go:334] "Generic (PLEG): container finished" podID="09fc44ca-39ea-428a-b743-728f222a63b9" containerID="fe15e42c3a6b49bc2586216aa1e03e38a614b95e35172d49e387b52e6069d49c" exitCode=0 Feb 27 19:14:52 crc kubenswrapper[4981]: I0227 19:14:52.301331 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6wdt" 
event={"ID":"09fc44ca-39ea-428a-b743-728f222a63b9","Type":"ContainerDied","Data":"fe15e42c3a6b49bc2586216aa1e03e38a614b95e35172d49e387b52e6069d49c"} Feb 27 19:14:52 crc kubenswrapper[4981]: I0227 19:14:52.334344 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8k42j" podStartSLOduration=4.572970496 podStartE2EDuration="2m29.33430901s" podCreationTimestamp="2026-02-27 19:12:23 +0000 UTC" firstStartedPulling="2026-02-27 19:12:26.766046813 +0000 UTC m=+1646.244827973" lastFinishedPulling="2026-02-27 19:14:51.527385327 +0000 UTC m=+1791.006166487" observedRunningTime="2026-02-27 19:14:52.314874068 +0000 UTC m=+1791.793655228" watchObservedRunningTime="2026-02-27 19:14:52.33430901 +0000 UTC m=+1791.813090170" Feb 27 19:14:53 crc kubenswrapper[4981]: I0227 19:14:53.329241 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-ncckq" podStartSLOduration=60.329204933 podStartE2EDuration="1m0.329204933s" podCreationTimestamp="2026-02-27 19:13:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:14:53.32621473 +0000 UTC m=+1792.804995900" watchObservedRunningTime="2026-02-27 19:14:53.329204933 +0000 UTC m=+1792.807986093" Feb 27 19:14:54 crc kubenswrapper[4981]: I0227 19:14:54.320464 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7h875" event={"ID":"094a0674-7bf9-4e18-9e70-8efed0ae3ac2","Type":"ContainerStarted","Data":"5bb3f4702c947998436733cc6cc2c2d7567ac9f3091ff61fd1bc793151cba664"} Feb 27 19:14:54 crc kubenswrapper[4981]: I0227 19:14:54.344959 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-7h875" podStartSLOduration=7.36233554 podStartE2EDuration="2m32.34493173s" podCreationTimestamp="2026-02-27 19:12:22 +0000 UTC" firstStartedPulling="2026-02-27 19:12:26.565611471 +0000 
UTC m=+1646.044392631" lastFinishedPulling="2026-02-27 19:14:51.548207661 +0000 UTC m=+1791.026988821" observedRunningTime="2026-02-27 19:14:54.3416916 +0000 UTC m=+1793.820472760" watchObservedRunningTime="2026-02-27 19:14:54.34493173 +0000 UTC m=+1793.823712890" Feb 27 19:14:55 crc kubenswrapper[4981]: I0227 19:14:55.331887 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ff4f098-86f0-4676-8254-f239843c7685","Type":"ContainerStarted","Data":"0f3ddb0627c0a4ba824a5eb65b289ef026e7cb41e90aa1877237d9e381f2c8e0"} Feb 27 19:14:56 crc kubenswrapper[4981]: I0227 19:14:56.342317 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536994-kg6pv" event={"ID":"91e1417f-019e-484a-afd2-05ae98b58cee","Type":"ContainerStarted","Data":"a3c5cca3e8149e88be34c021f45707aad26587070262895945e7ea19c52e2d2b"} Feb 27 19:14:56 crc kubenswrapper[4981]: I0227 19:14:56.357346 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536994-kg6pv" podStartSLOduration=48.816016571 podStartE2EDuration="56.357327536s" podCreationTimestamp="2026-02-27 19:14:00 +0000 UTC" firstStartedPulling="2026-02-27 19:14:47.575088341 +0000 UTC m=+1787.053869501" lastFinishedPulling="2026-02-27 19:14:55.116399306 +0000 UTC m=+1794.595180466" observedRunningTime="2026-02-27 19:14:56.356210142 +0000 UTC m=+1795.834991302" watchObservedRunningTime="2026-02-27 19:14:56.357327536 +0000 UTC m=+1795.836108696" Feb 27 19:14:57 crc kubenswrapper[4981]: I0227 19:14:57.353290 4981 generic.go:334] "Generic (PLEG): container finished" podID="91e1417f-019e-484a-afd2-05ae98b58cee" containerID="a3c5cca3e8149e88be34c021f45707aad26587070262895945e7ea19c52e2d2b" exitCode=0 Feb 27 19:14:57 crc kubenswrapper[4981]: I0227 19:14:57.353395 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536994-kg6pv" 
event={"ID":"91e1417f-019e-484a-afd2-05ae98b58cee","Type":"ContainerDied","Data":"a3c5cca3e8149e88be34c021f45707aad26587070262895945e7ea19c52e2d2b"} Feb 27 19:14:58 crc kubenswrapper[4981]: I0227 19:14:58.365270 4981 generic.go:334] "Generic (PLEG): container finished" podID="09fc44ca-39ea-428a-b743-728f222a63b9" containerID="b0fd7ac9278fce0a9c1c3348a852e2ff7d0f986c32cb49e3a478ebb3086bc72a" exitCode=0 Feb 27 19:14:58 crc kubenswrapper[4981]: I0227 19:14:58.365302 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6wdt" event={"ID":"09fc44ca-39ea-428a-b743-728f222a63b9","Type":"ContainerDied","Data":"b0fd7ac9278fce0a9c1c3348a852e2ff7d0f986c32cb49e3a478ebb3086bc72a"} Feb 27 19:14:58 crc kubenswrapper[4981]: I0227 19:14:58.680884 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536994-kg6pv" Feb 27 19:14:58 crc kubenswrapper[4981]: I0227 19:14:58.772966 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8zf7\" (UniqueName: \"kubernetes.io/projected/91e1417f-019e-484a-afd2-05ae98b58cee-kube-api-access-p8zf7\") pod \"91e1417f-019e-484a-afd2-05ae98b58cee\" (UID: \"91e1417f-019e-484a-afd2-05ae98b58cee\") " Feb 27 19:14:58 crc kubenswrapper[4981]: I0227 19:14:58.780807 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91e1417f-019e-484a-afd2-05ae98b58cee-kube-api-access-p8zf7" (OuterVolumeSpecName: "kube-api-access-p8zf7") pod "91e1417f-019e-484a-afd2-05ae98b58cee" (UID: "91e1417f-019e-484a-afd2-05ae98b58cee"). InnerVolumeSpecName "kube-api-access-p8zf7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:14:58 crc kubenswrapper[4981]: I0227 19:14:58.875337 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8zf7\" (UniqueName: \"kubernetes.io/projected/91e1417f-019e-484a-afd2-05ae98b58cee-kube-api-access-p8zf7\") on node \"crc\" DevicePath \"\"" Feb 27 19:14:59 crc kubenswrapper[4981]: I0227 19:14:59.388539 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536994-kg6pv" event={"ID":"91e1417f-019e-484a-afd2-05ae98b58cee","Type":"ContainerDied","Data":"da15b785863fdd56ef7f284557d94f5d3bb3b0999bf4fd284578380d3f3a7093"} Feb 27 19:14:59 crc kubenswrapper[4981]: I0227 19:14:59.388883 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da15b785863fdd56ef7f284557d94f5d3bb3b0999bf4fd284578380d3f3a7093" Feb 27 19:14:59 crc kubenswrapper[4981]: I0227 19:14:59.388771 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536994-kg6pv" Feb 27 19:14:59 crc kubenswrapper[4981]: I0227 19:14:59.449654 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536988-bmwsv"] Feb 27 19:14:59 crc kubenswrapper[4981]: I0227 19:14:59.475239 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536988-bmwsv"] Feb 27 19:14:59 crc kubenswrapper[4981]: I0227 19:14:59.629799 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" Feb 27 19:14:59 crc kubenswrapper[4981]: E0227 19:14:59.630138 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:14:59 crc kubenswrapper[4981]: I0227 19:14:59.640892 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5437f2b0-3e2f-434e-a1b2-a152345065a5" path="/var/lib/kubelet/pods/5437f2b0-3e2f-434e-a1b2-a152345065a5/volumes" Feb 27 19:15:00 crc kubenswrapper[4981]: I0227 19:15:00.174830 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536995-lvlv5"] Feb 27 19:15:00 crc kubenswrapper[4981]: E0227 19:15:00.175309 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91e1417f-019e-484a-afd2-05ae98b58cee" containerName="oc" Feb 27 19:15:00 crc kubenswrapper[4981]: I0227 19:15:00.175330 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e1417f-019e-484a-afd2-05ae98b58cee" containerName="oc" Feb 27 19:15:00 crc kubenswrapper[4981]: I0227 19:15:00.175564 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="91e1417f-019e-484a-afd2-05ae98b58cee" containerName="oc" Feb 27 19:15:00 crc kubenswrapper[4981]: I0227 19:15:00.176264 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-lvlv5" Feb 27 19:15:00 crc kubenswrapper[4981]: I0227 19:15:00.179524 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 19:15:00 crc kubenswrapper[4981]: I0227 19:15:00.180041 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 19:15:00 crc kubenswrapper[4981]: I0227 19:15:00.305966 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1dd8cc49-c6d8-4ccd-979e-d3b87fe70930-config-volume\") pod \"collect-profiles-29536995-lvlv5\" (UID: \"1dd8cc49-c6d8-4ccd-979e-d3b87fe70930\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-lvlv5" Feb 27 19:15:00 crc kubenswrapper[4981]: I0227 19:15:00.306073 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8flqw\" (UniqueName: \"kubernetes.io/projected/1dd8cc49-c6d8-4ccd-979e-d3b87fe70930-kube-api-access-8flqw\") pod \"collect-profiles-29536995-lvlv5\" (UID: \"1dd8cc49-c6d8-4ccd-979e-d3b87fe70930\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-lvlv5" Feb 27 19:15:00 crc kubenswrapper[4981]: I0227 19:15:00.306318 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1dd8cc49-c6d8-4ccd-979e-d3b87fe70930-secret-volume\") pod \"collect-profiles-29536995-lvlv5\" (UID: \"1dd8cc49-c6d8-4ccd-979e-d3b87fe70930\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-lvlv5" Feb 27 19:15:00 crc kubenswrapper[4981]: I0227 19:15:00.408251 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/1dd8cc49-c6d8-4ccd-979e-d3b87fe70930-config-volume\") pod \"collect-profiles-29536995-lvlv5\" (UID: \"1dd8cc49-c6d8-4ccd-979e-d3b87fe70930\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-lvlv5" Feb 27 19:15:00 crc kubenswrapper[4981]: I0227 19:15:00.408356 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8flqw\" (UniqueName: \"kubernetes.io/projected/1dd8cc49-c6d8-4ccd-979e-d3b87fe70930-kube-api-access-8flqw\") pod \"collect-profiles-29536995-lvlv5\" (UID: \"1dd8cc49-c6d8-4ccd-979e-d3b87fe70930\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-lvlv5" Feb 27 19:15:00 crc kubenswrapper[4981]: I0227 19:15:00.408424 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1dd8cc49-c6d8-4ccd-979e-d3b87fe70930-secret-volume\") pod \"collect-profiles-29536995-lvlv5\" (UID: \"1dd8cc49-c6d8-4ccd-979e-d3b87fe70930\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-lvlv5" Feb 27 19:15:00 crc kubenswrapper[4981]: I0227 19:15:00.409460 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1dd8cc49-c6d8-4ccd-979e-d3b87fe70930-config-volume\") pod \"collect-profiles-29536995-lvlv5\" (UID: \"1dd8cc49-c6d8-4ccd-979e-d3b87fe70930\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-lvlv5" Feb 27 19:15:00 crc kubenswrapper[4981]: I0227 19:15:00.413372 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1dd8cc49-c6d8-4ccd-979e-d3b87fe70930-secret-volume\") pod \"collect-profiles-29536995-lvlv5\" (UID: \"1dd8cc49-c6d8-4ccd-979e-d3b87fe70930\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-lvlv5" Feb 27 19:15:00 crc kubenswrapper[4981]: I0227 19:15:00.426607 4981 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8flqw\" (UniqueName: \"kubernetes.io/projected/1dd8cc49-c6d8-4ccd-979e-d3b87fe70930-kube-api-access-8flqw\") pod \"collect-profiles-29536995-lvlv5\" (UID: \"1dd8cc49-c6d8-4ccd-979e-d3b87fe70930\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-lvlv5" Feb 27 19:15:00 crc kubenswrapper[4981]: I0227 19:15:00.434750 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536995-lvlv5"] Feb 27 19:15:00 crc kubenswrapper[4981]: I0227 19:15:00.498802 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-lvlv5" Feb 27 19:15:00 crc kubenswrapper[4981]: I0227 19:15:00.936999 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536995-lvlv5"] Feb 27 19:15:01 crc kubenswrapper[4981]: I0227 19:15:01.423033 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-lvlv5" event={"ID":"1dd8cc49-c6d8-4ccd-979e-d3b87fe70930","Type":"ContainerStarted","Data":"ba200037e3f39e3b912dd5f7d1796c4bf3b465606951d81e1ef14c76f93cd42c"} Feb 27 19:15:06 crc kubenswrapper[4981]: I0227 19:15:06.483030 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-lvlv5" event={"ID":"1dd8cc49-c6d8-4ccd-979e-d3b87fe70930","Type":"ContainerStarted","Data":"17667cf1f6c6ff2ea58e0aee2b8e4b2f4510fb7bcb2a0b45776f1cfc06735d94"} Feb 27 19:15:07 crc kubenswrapper[4981]: I0227 19:15:07.503940 4981 generic.go:334] "Generic (PLEG): container finished" podID="1dd8cc49-c6d8-4ccd-979e-d3b87fe70930" containerID="17667cf1f6c6ff2ea58e0aee2b8e4b2f4510fb7bcb2a0b45776f1cfc06735d94" exitCode=0 Feb 27 19:15:07 crc kubenswrapper[4981]: I0227 19:15:07.504012 4981 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-lvlv5" event={"ID":"1dd8cc49-c6d8-4ccd-979e-d3b87fe70930","Type":"ContainerDied","Data":"17667cf1f6c6ff2ea58e0aee2b8e4b2f4510fb7bcb2a0b45776f1cfc06735d94"} Feb 27 19:15:09 crc kubenswrapper[4981]: I0227 19:15:09.370476 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-lvlv5" Feb 27 19:15:09 crc kubenswrapper[4981]: I0227 19:15:09.494922 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1dd8cc49-c6d8-4ccd-979e-d3b87fe70930-secret-volume\") pod \"1dd8cc49-c6d8-4ccd-979e-d3b87fe70930\" (UID: \"1dd8cc49-c6d8-4ccd-979e-d3b87fe70930\") " Feb 27 19:15:09 crc kubenswrapper[4981]: I0227 19:15:09.495155 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1dd8cc49-c6d8-4ccd-979e-d3b87fe70930-config-volume\") pod \"1dd8cc49-c6d8-4ccd-979e-d3b87fe70930\" (UID: \"1dd8cc49-c6d8-4ccd-979e-d3b87fe70930\") " Feb 27 19:15:09 crc kubenswrapper[4981]: I0227 19:15:09.495211 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8flqw\" (UniqueName: \"kubernetes.io/projected/1dd8cc49-c6d8-4ccd-979e-d3b87fe70930-kube-api-access-8flqw\") pod \"1dd8cc49-c6d8-4ccd-979e-d3b87fe70930\" (UID: \"1dd8cc49-c6d8-4ccd-979e-d3b87fe70930\") " Feb 27 19:15:09 crc kubenswrapper[4981]: I0227 19:15:09.496468 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd8cc49-c6d8-4ccd-979e-d3b87fe70930-config-volume" (OuterVolumeSpecName: "config-volume") pod "1dd8cc49-c6d8-4ccd-979e-d3b87fe70930" (UID: "1dd8cc49-c6d8-4ccd-979e-d3b87fe70930"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:15:09 crc kubenswrapper[4981]: I0227 19:15:09.501379 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dd8cc49-c6d8-4ccd-979e-d3b87fe70930-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1dd8cc49-c6d8-4ccd-979e-d3b87fe70930" (UID: "1dd8cc49-c6d8-4ccd-979e-d3b87fe70930"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:15:09 crc kubenswrapper[4981]: I0227 19:15:09.501910 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dd8cc49-c6d8-4ccd-979e-d3b87fe70930-kube-api-access-8flqw" (OuterVolumeSpecName: "kube-api-access-8flqw") pod "1dd8cc49-c6d8-4ccd-979e-d3b87fe70930" (UID: "1dd8cc49-c6d8-4ccd-979e-d3b87fe70930"). InnerVolumeSpecName "kube-api-access-8flqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:15:09 crc kubenswrapper[4981]: I0227 19:15:09.521392 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-lvlv5" event={"ID":"1dd8cc49-c6d8-4ccd-979e-d3b87fe70930","Type":"ContainerDied","Data":"ba200037e3f39e3b912dd5f7d1796c4bf3b465606951d81e1ef14c76f93cd42c"} Feb 27 19:15:09 crc kubenswrapper[4981]: I0227 19:15:09.521440 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba200037e3f39e3b912dd5f7d1796c4bf3b465606951d81e1ef14c76f93cd42c" Feb 27 19:15:09 crc kubenswrapper[4981]: I0227 19:15:09.521454 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-lvlv5" Feb 27 19:15:09 crc kubenswrapper[4981]: I0227 19:15:09.597595 4981 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1dd8cc49-c6d8-4ccd-979e-d3b87fe70930-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:09 crc kubenswrapper[4981]: I0227 19:15:09.597629 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8flqw\" (UniqueName: \"kubernetes.io/projected/1dd8cc49-c6d8-4ccd-979e-d3b87fe70930-kube-api-access-8flqw\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:09 crc kubenswrapper[4981]: I0227 19:15:09.597640 4981 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1dd8cc49-c6d8-4ccd-979e-d3b87fe70930-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:10 crc kubenswrapper[4981]: E0227 19:15:10.345728 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core:latest" Feb 27 19:15:10 crc kubenswrapper[4981]: E0227 19:15:10.346417 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4v56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6ff4f098-86f0-4676-8254-f239843c7685): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:15:10 crc kubenswrapper[4981]: I0227 19:15:10.630380 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" Feb 27 19:15:10 crc kubenswrapper[4981]: E0227 19:15:10.630836 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:15:11 crc kubenswrapper[4981]: I0227 19:15:11.543833 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6wdt" event={"ID":"09fc44ca-39ea-428a-b743-728f222a63b9","Type":"ContainerStarted","Data":"9edb871cd88e79b92133db1766f9a27f731ed54bb4b7d1df11b1b8aa1c26f81c"} Feb 27 19:15:11 crc kubenswrapper[4981]: I0227 19:15:11.567085 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z6wdt" podStartSLOduration=121.034193281 podStartE2EDuration="2m17.567036361s" podCreationTimestamp="2026-02-27 19:12:54 +0000 UTC" firstStartedPulling="2026-02-27 19:14:54.407172448 +0000 UTC m=+1793.885953608" lastFinishedPulling="2026-02-27 19:15:10.940015528 +0000 UTC m=+1810.418796688" observedRunningTime="2026-02-27 19:15:11.563806291 +0000 UTC m=+1811.042587471" watchObservedRunningTime="2026-02-27 19:15:11.567036361 +0000 UTC m=+1811.045817531" Feb 27 19:15:14 crc kubenswrapper[4981]: I0227 19:15:14.953128 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z6wdt" Feb 27 19:15:14 crc kubenswrapper[4981]: I0227 19:15:14.953568 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z6wdt" Feb 27 19:15:15 crc kubenswrapper[4981]: I0227 19:15:15.025419 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z6wdt" Feb 27 19:15:18 crc kubenswrapper[4981]: I0227 19:15:18.125085 4981 scope.go:117] "RemoveContainer" containerID="461af97dfefde8d5e1d889fe3e1633599ffdbd87f9acf06cf35be9d51671319d" Feb 27 19:15:18 crc kubenswrapper[4981]: I0227 19:15:18.158395 4981 scope.go:117] "RemoveContainer" 
containerID="889670a03574258a741b7a2e3e7d293f4321e0eee5a13978816a46df965fa41a" Feb 27 19:15:18 crc kubenswrapper[4981]: I0227 19:15:18.199829 4981 scope.go:117] "RemoveContainer" containerID="5ccd439c0236c0fd87d5d5170d8ddc8da0b3f368ea8eed38ef9f510c5b01137e" Feb 27 19:15:18 crc kubenswrapper[4981]: I0227 19:15:18.249954 4981 scope.go:117] "RemoveContainer" containerID="d5bea82b461062ec73a068e1c79214fa87c4ce1664bda7fbba0200fe8e05c16a" Feb 27 19:15:19 crc kubenswrapper[4981]: I0227 19:15:19.632603 4981 generic.go:334] "Generic (PLEG): container finished" podID="5640be3b-ba9b-4530-8bf8-595f0428c3ee" containerID="4e3e729a2f7c99f11756596ccfffce4dcd67e9c1cabd75f3b556e598bd9ab27b" exitCode=0 Feb 27 19:15:19 crc kubenswrapper[4981]: I0227 19:15:19.641428 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ncckq" event={"ID":"5640be3b-ba9b-4530-8bf8-595f0428c3ee","Type":"ContainerDied","Data":"4e3e729a2f7c99f11756596ccfffce4dcd67e9c1cabd75f3b556e598bd9ab27b"} Feb 27 19:15:25 crc kubenswrapper[4981]: I0227 19:15:25.045598 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z6wdt" Feb 27 19:15:25 crc kubenswrapper[4981]: I0227 19:15:25.109007 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z6wdt"] Feb 27 19:15:25 crc kubenswrapper[4981]: I0227 19:15:25.631539 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" Feb 27 19:15:25 crc kubenswrapper[4981]: E0227 19:15:25.632172 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" 
podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:15:25 crc kubenswrapper[4981]: I0227 19:15:25.713593 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z6wdt" podUID="09fc44ca-39ea-428a-b743-728f222a63b9" containerName="registry-server" containerID="cri-o://9edb871cd88e79b92133db1766f9a27f731ed54bb4b7d1df11b1b8aa1c26f81c" gracePeriod=2 Feb 27 19:15:34 crc kubenswrapper[4981]: E0227 19:15:34.953733 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9edb871cd88e79b92133db1766f9a27f731ed54bb4b7d1df11b1b8aa1c26f81c is running failed: container process not found" containerID="9edb871cd88e79b92133db1766f9a27f731ed54bb4b7d1df11b1b8aa1c26f81c" cmd=["grpc_health_probe","-addr=:50051"] Feb 27 19:15:34 crc kubenswrapper[4981]: E0227 19:15:34.954902 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9edb871cd88e79b92133db1766f9a27f731ed54bb4b7d1df11b1b8aa1c26f81c is running failed: container process not found" containerID="9edb871cd88e79b92133db1766f9a27f731ed54bb4b7d1df11b1b8aa1c26f81c" cmd=["grpc_health_probe","-addr=:50051"] Feb 27 19:15:34 crc kubenswrapper[4981]: E0227 19:15:34.956187 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9edb871cd88e79b92133db1766f9a27f731ed54bb4b7d1df11b1b8aa1c26f81c is running failed: container process not found" containerID="9edb871cd88e79b92133db1766f9a27f731ed54bb4b7d1df11b1b8aa1c26f81c" cmd=["grpc_health_probe","-addr=:50051"] Feb 27 19:15:34 crc kubenswrapper[4981]: E0227 19:15:34.956267 4981 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
9edb871cd88e79b92133db1766f9a27f731ed54bb4b7d1df11b1b8aa1c26f81c is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-z6wdt" podUID="09fc44ca-39ea-428a-b743-728f222a63b9" containerName="registry-server" Feb 27 19:15:36 crc kubenswrapper[4981]: I0227 19:15:36.628807 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" Feb 27 19:15:36 crc kubenswrapper[4981]: E0227 19:15:36.630706 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:15:36 crc kubenswrapper[4981]: I0227 19:15:36.842212 4981 generic.go:334] "Generic (PLEG): container finished" podID="09fc44ca-39ea-428a-b743-728f222a63b9" containerID="9edb871cd88e79b92133db1766f9a27f731ed54bb4b7d1df11b1b8aa1c26f81c" exitCode=0 Feb 27 19:15:36 crc kubenswrapper[4981]: I0227 19:15:36.842237 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6wdt" event={"ID":"09fc44ca-39ea-428a-b743-728f222a63b9","Type":"ContainerDied","Data":"9edb871cd88e79b92133db1766f9a27f731ed54bb4b7d1df11b1b8aa1c26f81c"} Feb 27 19:15:37 crc kubenswrapper[4981]: I0227 19:15:37.479121 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ncckq" Feb 27 19:15:37 crc kubenswrapper[4981]: I0227 19:15:37.589075 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-credential-keys\") pod \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\" (UID: \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\") " Feb 27 19:15:37 crc kubenswrapper[4981]: I0227 19:15:37.589181 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5q8l\" (UniqueName: \"kubernetes.io/projected/5640be3b-ba9b-4530-8bf8-595f0428c3ee-kube-api-access-d5q8l\") pod \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\" (UID: \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\") " Feb 27 19:15:37 crc kubenswrapper[4981]: I0227 19:15:37.589258 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-fernet-keys\") pod \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\" (UID: \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\") " Feb 27 19:15:37 crc kubenswrapper[4981]: I0227 19:15:37.589300 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-combined-ca-bundle\") pod \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\" (UID: \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\") " Feb 27 19:15:37 crc kubenswrapper[4981]: I0227 19:15:37.589339 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-config-data\") pod \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\" (UID: \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\") " Feb 27 19:15:37 crc kubenswrapper[4981]: I0227 19:15:37.589375 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-scripts\") pod \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\" (UID: \"5640be3b-ba9b-4530-8bf8-595f0428c3ee\") " Feb 27 19:15:37 crc kubenswrapper[4981]: I0227 19:15:37.595413 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5640be3b-ba9b-4530-8bf8-595f0428c3ee-kube-api-access-d5q8l" (OuterVolumeSpecName: "kube-api-access-d5q8l") pod "5640be3b-ba9b-4530-8bf8-595f0428c3ee" (UID: "5640be3b-ba9b-4530-8bf8-595f0428c3ee"). InnerVolumeSpecName "kube-api-access-d5q8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:15:37 crc kubenswrapper[4981]: I0227 19:15:37.596133 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5640be3b-ba9b-4530-8bf8-595f0428c3ee" (UID: "5640be3b-ba9b-4530-8bf8-595f0428c3ee"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:15:37 crc kubenswrapper[4981]: I0227 19:15:37.599229 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-scripts" (OuterVolumeSpecName: "scripts") pod "5640be3b-ba9b-4530-8bf8-595f0428c3ee" (UID: "5640be3b-ba9b-4530-8bf8-595f0428c3ee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:15:37 crc kubenswrapper[4981]: I0227 19:15:37.608901 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5640be3b-ba9b-4530-8bf8-595f0428c3ee" (UID: "5640be3b-ba9b-4530-8bf8-595f0428c3ee"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:15:37 crc kubenswrapper[4981]: I0227 19:15:37.614666 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5640be3b-ba9b-4530-8bf8-595f0428c3ee" (UID: "5640be3b-ba9b-4530-8bf8-595f0428c3ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:15:37 crc kubenswrapper[4981]: I0227 19:15:37.625368 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-config-data" (OuterVolumeSpecName: "config-data") pod "5640be3b-ba9b-4530-8bf8-595f0428c3ee" (UID: "5640be3b-ba9b-4530-8bf8-595f0428c3ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:15:37 crc kubenswrapper[4981]: I0227 19:15:37.691855 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5q8l\" (UniqueName: \"kubernetes.io/projected/5640be3b-ba9b-4530-8bf8-595f0428c3ee-kube-api-access-d5q8l\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:37 crc kubenswrapper[4981]: I0227 19:15:37.691885 4981 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:37 crc kubenswrapper[4981]: I0227 19:15:37.691894 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:37 crc kubenswrapper[4981]: I0227 19:15:37.691901 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-config-data\") on node \"crc\" DevicePath 
\"\"" Feb 27 19:15:37 crc kubenswrapper[4981]: I0227 19:15:37.691911 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:37 crc kubenswrapper[4981]: I0227 19:15:37.691918 4981 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5640be3b-ba9b-4530-8bf8-595f0428c3ee-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:37 crc kubenswrapper[4981]: I0227 19:15:37.853084 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ncckq" event={"ID":"5640be3b-ba9b-4530-8bf8-595f0428c3ee","Type":"ContainerDied","Data":"af684c0980c28dc3bb84c1d5b88876178736e72706aa776697aaea5b78b1d202"} Feb 27 19:15:37 crc kubenswrapper[4981]: I0227 19:15:37.853139 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af684c0980c28dc3bb84c1d5b88876178736e72706aa776697aaea5b78b1d202" Feb 27 19:15:37 crc kubenswrapper[4981]: I0227 19:15:37.853141 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ncckq" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.626697 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6b879f46f9-hf222"] Feb 27 19:15:38 crc kubenswrapper[4981]: E0227 19:15:38.627176 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5640be3b-ba9b-4530-8bf8-595f0428c3ee" containerName="keystone-bootstrap" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.627198 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="5640be3b-ba9b-4530-8bf8-595f0428c3ee" containerName="keystone-bootstrap" Feb 27 19:15:38 crc kubenswrapper[4981]: E0227 19:15:38.627244 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd8cc49-c6d8-4ccd-979e-d3b87fe70930" containerName="collect-profiles" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.627253 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd8cc49-c6d8-4ccd-979e-d3b87fe70930" containerName="collect-profiles" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.627460 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="5640be3b-ba9b-4530-8bf8-595f0428c3ee" containerName="keystone-bootstrap" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.627486 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dd8cc49-c6d8-4ccd-979e-d3b87fe70930" containerName="collect-profiles" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.628486 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.632126 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.632467 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.633004 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4zdz2" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.633269 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.633307 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.635314 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.658888 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6b879f46f9-hf222"] Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.712288 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-credential-keys\") pod \"keystone-6b879f46f9-hf222\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.712389 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ls6z\" (UniqueName: \"kubernetes.io/projected/087da308-30ee-4a17-945a-844baf0cf4b4-kube-api-access-4ls6z\") pod \"keystone-6b879f46f9-hf222\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " 
pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.712450 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-combined-ca-bundle\") pod \"keystone-6b879f46f9-hf222\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.712547 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-config-data\") pod \"keystone-6b879f46f9-hf222\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.712578 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-scripts\") pod \"keystone-6b879f46f9-hf222\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.712607 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-internal-tls-certs\") pod \"keystone-6b879f46f9-hf222\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.713143 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-fernet-keys\") pod \"keystone-6b879f46f9-hf222\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " 
pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.713392 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-public-tls-certs\") pod \"keystone-6b879f46f9-hf222\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.815909 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-config-data\") pod \"keystone-6b879f46f9-hf222\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.815980 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-scripts\") pod \"keystone-6b879f46f9-hf222\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.816013 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-internal-tls-certs\") pod \"keystone-6b879f46f9-hf222\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.816079 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-fernet-keys\") pod \"keystone-6b879f46f9-hf222\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 
19:15:38.816131 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-public-tls-certs\") pod \"keystone-6b879f46f9-hf222\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.816192 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-credential-keys\") pod \"keystone-6b879f46f9-hf222\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.816235 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ls6z\" (UniqueName: \"kubernetes.io/projected/087da308-30ee-4a17-945a-844baf0cf4b4-kube-api-access-4ls6z\") pod \"keystone-6b879f46f9-hf222\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.816265 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-combined-ca-bundle\") pod \"keystone-6b879f46f9-hf222\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.824592 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-public-tls-certs\") pod \"keystone-6b879f46f9-hf222\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.824798 4981 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-combined-ca-bundle\") pod \"keystone-6b879f46f9-hf222\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.824823 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-fernet-keys\") pod \"keystone-6b879f46f9-hf222\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.825156 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-scripts\") pod \"keystone-6b879f46f9-hf222\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.828014 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-config-data\") pod \"keystone-6b879f46f9-hf222\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.831903 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-internal-tls-certs\") pod \"keystone-6b879f46f9-hf222\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.832508 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-credential-keys\") pod 
\"keystone-6b879f46f9-hf222\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.836331 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ls6z\" (UniqueName: \"kubernetes.io/projected/087da308-30ee-4a17-945a-844baf0cf4b4-kube-api-access-4ls6z\") pod \"keystone-6b879f46f9-hf222\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:38 crc kubenswrapper[4981]: I0227 19:15:38.957165 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:15:41 crc kubenswrapper[4981]: I0227 19:15:41.141580 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z6wdt" Feb 27 19:15:41 crc kubenswrapper[4981]: I0227 19:15:41.251860 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6b879f46f9-hf222"] Feb 27 19:15:41 crc kubenswrapper[4981]: I0227 19:15:41.277020 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sszw\" (UniqueName: \"kubernetes.io/projected/09fc44ca-39ea-428a-b743-728f222a63b9-kube-api-access-8sszw\") pod \"09fc44ca-39ea-428a-b743-728f222a63b9\" (UID: \"09fc44ca-39ea-428a-b743-728f222a63b9\") " Feb 27 19:15:41 crc kubenswrapper[4981]: I0227 19:15:41.277381 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09fc44ca-39ea-428a-b743-728f222a63b9-utilities\") pod \"09fc44ca-39ea-428a-b743-728f222a63b9\" (UID: \"09fc44ca-39ea-428a-b743-728f222a63b9\") " Feb 27 19:15:41 crc kubenswrapper[4981]: I0227 19:15:41.277563 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/09fc44ca-39ea-428a-b743-728f222a63b9-catalog-content\") pod \"09fc44ca-39ea-428a-b743-728f222a63b9\" (UID: \"09fc44ca-39ea-428a-b743-728f222a63b9\") " Feb 27 19:15:41 crc kubenswrapper[4981]: I0227 19:15:41.278521 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09fc44ca-39ea-428a-b743-728f222a63b9-utilities" (OuterVolumeSpecName: "utilities") pod "09fc44ca-39ea-428a-b743-728f222a63b9" (UID: "09fc44ca-39ea-428a-b743-728f222a63b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:15:41 crc kubenswrapper[4981]: I0227 19:15:41.291481 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09fc44ca-39ea-428a-b743-728f222a63b9-kube-api-access-8sszw" (OuterVolumeSpecName: "kube-api-access-8sszw") pod "09fc44ca-39ea-428a-b743-728f222a63b9" (UID: "09fc44ca-39ea-428a-b743-728f222a63b9"). InnerVolumeSpecName "kube-api-access-8sszw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:15:41 crc kubenswrapper[4981]: I0227 19:15:41.348084 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09fc44ca-39ea-428a-b743-728f222a63b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09fc44ca-39ea-428a-b743-728f222a63b9" (UID: "09fc44ca-39ea-428a-b743-728f222a63b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:15:41 crc kubenswrapper[4981]: I0227 19:15:41.379792 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09fc44ca-39ea-428a-b743-728f222a63b9-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:41 crc kubenswrapper[4981]: I0227 19:15:41.379889 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09fc44ca-39ea-428a-b743-728f222a63b9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:41 crc kubenswrapper[4981]: I0227 19:15:41.379928 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sszw\" (UniqueName: \"kubernetes.io/projected/09fc44ca-39ea-428a-b743-728f222a63b9-kube-api-access-8sszw\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:41 crc kubenswrapper[4981]: I0227 19:15:41.903483 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6wdt" event={"ID":"09fc44ca-39ea-428a-b743-728f222a63b9","Type":"ContainerDied","Data":"03aa956917165b3b8c9c22c2a08f0a8bcf36f914b15127f366cb3bada9a484aa"} Feb 27 19:15:41 crc kubenswrapper[4981]: I0227 19:15:41.903598 4981 scope.go:117] "RemoveContainer" containerID="9edb871cd88e79b92133db1766f9a27f731ed54bb4b7d1df11b1b8aa1c26f81c" Feb 27 19:15:41 crc kubenswrapper[4981]: I0227 19:15:41.904047 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z6wdt" Feb 27 19:15:41 crc kubenswrapper[4981]: I0227 19:15:41.905764 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6b879f46f9-hf222" event={"ID":"087da308-30ee-4a17-945a-844baf0cf4b4","Type":"ContainerStarted","Data":"59a5b965f6d87fa3d0946ea826d976074eb37a38f37b9809cebb0d08e9b762b3"} Feb 27 19:15:41 crc kubenswrapper[4981]: I0227 19:15:41.905814 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6b879f46f9-hf222" event={"ID":"087da308-30ee-4a17-945a-844baf0cf4b4","Type":"ContainerStarted","Data":"04acb6c88d2caf2e0efc687ab30e047e3b37bdcd4d22af1b19197156d2276983"} Feb 27 19:15:41 crc kubenswrapper[4981]: I0227 19:15:41.927015 4981 scope.go:117] "RemoveContainer" containerID="b0fd7ac9278fce0a9c1c3348a852e2ff7d0f986c32cb49e3a478ebb3086bc72a" Feb 27 19:15:41 crc kubenswrapper[4981]: I0227 19:15:41.927600 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z6wdt"] Feb 27 19:15:41 crc kubenswrapper[4981]: I0227 19:15:41.937756 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z6wdt"] Feb 27 19:15:41 crc kubenswrapper[4981]: I0227 19:15:41.960125 4981 scope.go:117] "RemoveContainer" containerID="fe15e42c3a6b49bc2586216aa1e03e38a614b95e35172d49e387b52e6069d49c" Feb 27 19:15:42 crc kubenswrapper[4981]: E0227 19:15:42.079473 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 27 19:15:42 crc kubenswrapper[4981]: E0227 19:15:42.079680 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4v56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6ff4f098-86f0-4676-8254-f239843c7685): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 19:15:42 crc kubenswrapper[4981]: E0227 19:15:42.080950 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="6ff4f098-86f0-4676-8254-f239843c7685" Feb 27 19:15:42 crc kubenswrapper[4981]: I0227 19:15:42.914775 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6ff4f098-86f0-4676-8254-f239843c7685" containerName="ceilometer-central-agent" containerID="cri-o://4c5f12a7c4cf24be30b08af6ae2a985fa57fd2f7fb324c623e373af7473f4803" gracePeriod=30 Feb 27 19:15:42 crc kubenswrapper[4981]: I0227 19:15:42.914860 4981 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="6ff4f098-86f0-4676-8254-f239843c7685" containerName="ceilometer-notification-agent" containerID="cri-o://0f3ddb0627c0a4ba824a5eb65b289ef026e7cb41e90aa1877237d9e381f2c8e0" gracePeriod=30 Feb 27 19:15:43 crc kubenswrapper[4981]: I0227 19:15:43.641088 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09fc44ca-39ea-428a-b743-728f222a63b9" path="/var/lib/kubelet/pods/09fc44ca-39ea-428a-b743-728f222a63b9/volumes" Feb 27 19:15:44 crc kubenswrapper[4981]: I0227 19:15:44.936133 4981 generic.go:334] "Generic (PLEG): container finished" podID="6ff4f098-86f0-4676-8254-f239843c7685" containerID="0f3ddb0627c0a4ba824a5eb65b289ef026e7cb41e90aa1877237d9e381f2c8e0" exitCode=0 Feb 27 19:15:44 crc kubenswrapper[4981]: I0227 19:15:44.936219 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ff4f098-86f0-4676-8254-f239843c7685","Type":"ContainerDied","Data":"0f3ddb0627c0a4ba824a5eb65b289ef026e7cb41e90aa1877237d9e381f2c8e0"} Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.638957 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.664261 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6b879f46f9-hf222" podStartSLOduration=7.664239168 podStartE2EDuration="7.664239168s" podCreationTimestamp="2026-02-27 19:15:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:15:42.963750482 +0000 UTC m=+1842.442531642" watchObservedRunningTime="2026-02-27 19:15:45.664239168 +0000 UTC m=+1845.143020328" Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.812683 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff4f098-86f0-4676-8254-f239843c7685-combined-ca-bundle\") pod \"6ff4f098-86f0-4676-8254-f239843c7685\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.812904 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ff4f098-86f0-4676-8254-f239843c7685-log-httpd\") pod \"6ff4f098-86f0-4676-8254-f239843c7685\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.812973 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4v56\" (UniqueName: \"kubernetes.io/projected/6ff4f098-86f0-4676-8254-f239843c7685-kube-api-access-w4v56\") pod \"6ff4f098-86f0-4676-8254-f239843c7685\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.813137 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ff4f098-86f0-4676-8254-f239843c7685-run-httpd\") pod \"6ff4f098-86f0-4676-8254-f239843c7685\" (UID: 
\"6ff4f098-86f0-4676-8254-f239843c7685\") " Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.813251 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ff4f098-86f0-4676-8254-f239843c7685-scripts\") pod \"6ff4f098-86f0-4676-8254-f239843c7685\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.813298 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ff4f098-86f0-4676-8254-f239843c7685-config-data\") pod \"6ff4f098-86f0-4676-8254-f239843c7685\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.813341 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ff4f098-86f0-4676-8254-f239843c7685-sg-core-conf-yaml\") pod \"6ff4f098-86f0-4676-8254-f239843c7685\" (UID: \"6ff4f098-86f0-4676-8254-f239843c7685\") " Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.813918 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ff4f098-86f0-4676-8254-f239843c7685-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6ff4f098-86f0-4676-8254-f239843c7685" (UID: "6ff4f098-86f0-4676-8254-f239843c7685"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.814498 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ff4f098-86f0-4676-8254-f239843c7685-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6ff4f098-86f0-4676-8254-f239843c7685" (UID: "6ff4f098-86f0-4676-8254-f239843c7685"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.815241 4981 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ff4f098-86f0-4676-8254-f239843c7685-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.815270 4981 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ff4f098-86f0-4676-8254-f239843c7685-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.819817 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ff4f098-86f0-4676-8254-f239843c7685-kube-api-access-w4v56" (OuterVolumeSpecName: "kube-api-access-w4v56") pod "6ff4f098-86f0-4676-8254-f239843c7685" (UID: "6ff4f098-86f0-4676-8254-f239843c7685"). InnerVolumeSpecName "kube-api-access-w4v56". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.820206 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ff4f098-86f0-4676-8254-f239843c7685-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6ff4f098-86f0-4676-8254-f239843c7685" (UID: "6ff4f098-86f0-4676-8254-f239843c7685"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.825218 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ff4f098-86f0-4676-8254-f239843c7685-scripts" (OuterVolumeSpecName: "scripts") pod "6ff4f098-86f0-4676-8254-f239843c7685" (UID: "6ff4f098-86f0-4676-8254-f239843c7685"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.882772 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ff4f098-86f0-4676-8254-f239843c7685-config-data" (OuterVolumeSpecName: "config-data") pod "6ff4f098-86f0-4676-8254-f239843c7685" (UID: "6ff4f098-86f0-4676-8254-f239843c7685"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.894259 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ff4f098-86f0-4676-8254-f239843c7685-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ff4f098-86f0-4676-8254-f239843c7685" (UID: "6ff4f098-86f0-4676-8254-f239843c7685"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.916682 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff4f098-86f0-4676-8254-f239843c7685-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.916716 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4v56\" (UniqueName: \"kubernetes.io/projected/6ff4f098-86f0-4676-8254-f239843c7685-kube-api-access-w4v56\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.916729 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ff4f098-86f0-4676-8254-f239843c7685-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.916739 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ff4f098-86f0-4676-8254-f239843c7685-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 
19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.916748 4981 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ff4f098-86f0-4676-8254-f239843c7685-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.953279 4981 generic.go:334] "Generic (PLEG): container finished" podID="6ff4f098-86f0-4676-8254-f239843c7685" containerID="4c5f12a7c4cf24be30b08af6ae2a985fa57fd2f7fb324c623e373af7473f4803" exitCode=0 Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.953345 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ff4f098-86f0-4676-8254-f239843c7685","Type":"ContainerDied","Data":"4c5f12a7c4cf24be30b08af6ae2a985fa57fd2f7fb324c623e373af7473f4803"} Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.953429 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ff4f098-86f0-4676-8254-f239843c7685","Type":"ContainerDied","Data":"20cdf877bc66f784b9288876e811d2ff9e854b4a39bc5a4ebe8d81a17ba2771b"} Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.953455 4981 scope.go:117] "RemoveContainer" containerID="0f3ddb0627c0a4ba824a5eb65b289ef026e7cb41e90aa1877237d9e381f2c8e0" Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.953372 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:15:45 crc kubenswrapper[4981]: I0227 19:15:45.981584 4981 scope.go:117] "RemoveContainer" containerID="4c5f12a7c4cf24be30b08af6ae2a985fa57fd2f7fb324c623e373af7473f4803" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.001974 4981 scope.go:117] "RemoveContainer" containerID="0f3ddb0627c0a4ba824a5eb65b289ef026e7cb41e90aa1877237d9e381f2c8e0" Feb 27 19:15:46 crc kubenswrapper[4981]: E0227 19:15:46.003229 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f3ddb0627c0a4ba824a5eb65b289ef026e7cb41e90aa1877237d9e381f2c8e0\": container with ID starting with 0f3ddb0627c0a4ba824a5eb65b289ef026e7cb41e90aa1877237d9e381f2c8e0 not found: ID does not exist" containerID="0f3ddb0627c0a4ba824a5eb65b289ef026e7cb41e90aa1877237d9e381f2c8e0" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.003267 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f3ddb0627c0a4ba824a5eb65b289ef026e7cb41e90aa1877237d9e381f2c8e0"} err="failed to get container status \"0f3ddb0627c0a4ba824a5eb65b289ef026e7cb41e90aa1877237d9e381f2c8e0\": rpc error: code = NotFound desc = could not find container \"0f3ddb0627c0a4ba824a5eb65b289ef026e7cb41e90aa1877237d9e381f2c8e0\": container with ID starting with 0f3ddb0627c0a4ba824a5eb65b289ef026e7cb41e90aa1877237d9e381f2c8e0 not found: ID does not exist" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.003302 4981 scope.go:117] "RemoveContainer" containerID="4c5f12a7c4cf24be30b08af6ae2a985fa57fd2f7fb324c623e373af7473f4803" Feb 27 19:15:46 crc kubenswrapper[4981]: E0227 19:15:46.003594 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c5f12a7c4cf24be30b08af6ae2a985fa57fd2f7fb324c623e373af7473f4803\": container with ID starting with 
4c5f12a7c4cf24be30b08af6ae2a985fa57fd2f7fb324c623e373af7473f4803 not found: ID does not exist" containerID="4c5f12a7c4cf24be30b08af6ae2a985fa57fd2f7fb324c623e373af7473f4803" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.003616 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c5f12a7c4cf24be30b08af6ae2a985fa57fd2f7fb324c623e373af7473f4803"} err="failed to get container status \"4c5f12a7c4cf24be30b08af6ae2a985fa57fd2f7fb324c623e373af7473f4803\": rpc error: code = NotFound desc = could not find container \"4c5f12a7c4cf24be30b08af6ae2a985fa57fd2f7fb324c623e373af7473f4803\": container with ID starting with 4c5f12a7c4cf24be30b08af6ae2a985fa57fd2f7fb324c623e373af7473f4803 not found: ID does not exist" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.032763 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.050153 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.087155 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:15:46 crc kubenswrapper[4981]: E0227 19:15:46.087946 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09fc44ca-39ea-428a-b743-728f222a63b9" containerName="extract-utilities" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.088071 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="09fc44ca-39ea-428a-b743-728f222a63b9" containerName="extract-utilities" Feb 27 19:15:46 crc kubenswrapper[4981]: E0227 19:15:46.088165 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09fc44ca-39ea-428a-b743-728f222a63b9" containerName="extract-content" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.088246 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="09fc44ca-39ea-428a-b743-728f222a63b9" 
containerName="extract-content" Feb 27 19:15:46 crc kubenswrapper[4981]: E0227 19:15:46.088327 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09fc44ca-39ea-428a-b743-728f222a63b9" containerName="registry-server" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.088390 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="09fc44ca-39ea-428a-b743-728f222a63b9" containerName="registry-server" Feb 27 19:15:46 crc kubenswrapper[4981]: E0227 19:15:46.088468 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff4f098-86f0-4676-8254-f239843c7685" containerName="ceilometer-notification-agent" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.088539 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ff4f098-86f0-4676-8254-f239843c7685" containerName="ceilometer-notification-agent" Feb 27 19:15:46 crc kubenswrapper[4981]: E0227 19:15:46.088613 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff4f098-86f0-4676-8254-f239843c7685" containerName="ceilometer-central-agent" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.088682 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ff4f098-86f0-4676-8254-f239843c7685" containerName="ceilometer-central-agent" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.088989 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="09fc44ca-39ea-428a-b743-728f222a63b9" containerName="registry-server" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.089125 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ff4f098-86f0-4676-8254-f239843c7685" containerName="ceilometer-notification-agent" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.089402 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ff4f098-86f0-4676-8254-f239843c7685" containerName="ceilometer-central-agent" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.091518 4981 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.094364 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.094587 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.097607 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.120268 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d185dc90-7079-44d3-b2a0-e6ca77211e46-run-httpd\") pod \"ceilometer-0\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " pod="openstack/ceilometer-0" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.120692 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d185dc90-7079-44d3-b2a0-e6ca77211e46-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " pod="openstack/ceilometer-0" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.120750 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d185dc90-7079-44d3-b2a0-e6ca77211e46-scripts\") pod \"ceilometer-0\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " pod="openstack/ceilometer-0" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.120835 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d185dc90-7079-44d3-b2a0-e6ca77211e46-log-httpd\") pod \"ceilometer-0\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " 
pod="openstack/ceilometer-0" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.120898 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d185dc90-7079-44d3-b2a0-e6ca77211e46-config-data\") pod \"ceilometer-0\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " pod="openstack/ceilometer-0" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.120937 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d185dc90-7079-44d3-b2a0-e6ca77211e46-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " pod="openstack/ceilometer-0" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.120986 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kj4w\" (UniqueName: \"kubernetes.io/projected/d185dc90-7079-44d3-b2a0-e6ca77211e46-kube-api-access-6kj4w\") pod \"ceilometer-0\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " pod="openstack/ceilometer-0" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.155756 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:15:46 crc kubenswrapper[4981]: E0227 19:15:46.156760 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-6kj4w log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="d185dc90-7079-44d3-b2a0-e6ca77211e46" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.221874 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kj4w\" (UniqueName: \"kubernetes.io/projected/d185dc90-7079-44d3-b2a0-e6ca77211e46-kube-api-access-6kj4w\") pod \"ceilometer-0\" 
(UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " pod="openstack/ceilometer-0" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.221949 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d185dc90-7079-44d3-b2a0-e6ca77211e46-run-httpd\") pod \"ceilometer-0\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " pod="openstack/ceilometer-0" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.221974 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d185dc90-7079-44d3-b2a0-e6ca77211e46-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " pod="openstack/ceilometer-0" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.222018 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d185dc90-7079-44d3-b2a0-e6ca77211e46-scripts\") pod \"ceilometer-0\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " pod="openstack/ceilometer-0" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.222076 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d185dc90-7079-44d3-b2a0-e6ca77211e46-log-httpd\") pod \"ceilometer-0\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " pod="openstack/ceilometer-0" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.222133 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d185dc90-7079-44d3-b2a0-e6ca77211e46-config-data\") pod \"ceilometer-0\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " pod="openstack/ceilometer-0" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.222165 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/d185dc90-7079-44d3-b2a0-e6ca77211e46-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " pod="openstack/ceilometer-0" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.223023 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d185dc90-7079-44d3-b2a0-e6ca77211e46-log-httpd\") pod \"ceilometer-0\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " pod="openstack/ceilometer-0" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.223923 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d185dc90-7079-44d3-b2a0-e6ca77211e46-run-httpd\") pod \"ceilometer-0\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " pod="openstack/ceilometer-0" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.228189 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d185dc90-7079-44d3-b2a0-e6ca77211e46-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " pod="openstack/ceilometer-0" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.228191 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d185dc90-7079-44d3-b2a0-e6ca77211e46-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " pod="openstack/ceilometer-0" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.228220 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d185dc90-7079-44d3-b2a0-e6ca77211e46-config-data\") pod \"ceilometer-0\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " pod="openstack/ceilometer-0" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.228852 4981 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d185dc90-7079-44d3-b2a0-e6ca77211e46-scripts\") pod \"ceilometer-0\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " pod="openstack/ceilometer-0" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.237972 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kj4w\" (UniqueName: \"kubernetes.io/projected/d185dc90-7079-44d3-b2a0-e6ca77211e46-kube-api-access-6kj4w\") pod \"ceilometer-0\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " pod="openstack/ceilometer-0" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.962625 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:15:46 crc kubenswrapper[4981]: I0227 19:15:46.974863 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:15:47 crc kubenswrapper[4981]: I0227 19:15:47.036606 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d185dc90-7079-44d3-b2a0-e6ca77211e46-sg-core-conf-yaml\") pod \"d185dc90-7079-44d3-b2a0-e6ca77211e46\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " Feb 27 19:15:47 crc kubenswrapper[4981]: I0227 19:15:47.036764 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d185dc90-7079-44d3-b2a0-e6ca77211e46-run-httpd\") pod \"d185dc90-7079-44d3-b2a0-e6ca77211e46\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " Feb 27 19:15:47 crc kubenswrapper[4981]: I0227 19:15:47.036858 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d185dc90-7079-44d3-b2a0-e6ca77211e46-scripts\") pod \"d185dc90-7079-44d3-b2a0-e6ca77211e46\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") 
" Feb 27 19:15:47 crc kubenswrapper[4981]: I0227 19:15:47.036959 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d185dc90-7079-44d3-b2a0-e6ca77211e46-combined-ca-bundle\") pod \"d185dc90-7079-44d3-b2a0-e6ca77211e46\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " Feb 27 19:15:47 crc kubenswrapper[4981]: I0227 19:15:47.036987 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d185dc90-7079-44d3-b2a0-e6ca77211e46-config-data\") pod \"d185dc90-7079-44d3-b2a0-e6ca77211e46\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " Feb 27 19:15:47 crc kubenswrapper[4981]: I0227 19:15:47.037014 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d185dc90-7079-44d3-b2a0-e6ca77211e46-log-httpd\") pod \"d185dc90-7079-44d3-b2a0-e6ca77211e46\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " Feb 27 19:15:47 crc kubenswrapper[4981]: I0227 19:15:47.037041 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kj4w\" (UniqueName: \"kubernetes.io/projected/d185dc90-7079-44d3-b2a0-e6ca77211e46-kube-api-access-6kj4w\") pod \"d185dc90-7079-44d3-b2a0-e6ca77211e46\" (UID: \"d185dc90-7079-44d3-b2a0-e6ca77211e46\") " Feb 27 19:15:47 crc kubenswrapper[4981]: I0227 19:15:47.037260 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d185dc90-7079-44d3-b2a0-e6ca77211e46-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d185dc90-7079-44d3-b2a0-e6ca77211e46" (UID: "d185dc90-7079-44d3-b2a0-e6ca77211e46"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:15:47 crc kubenswrapper[4981]: I0227 19:15:47.037520 4981 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d185dc90-7079-44d3-b2a0-e6ca77211e46-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:47 crc kubenswrapper[4981]: I0227 19:15:47.037559 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d185dc90-7079-44d3-b2a0-e6ca77211e46-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d185dc90-7079-44d3-b2a0-e6ca77211e46" (UID: "d185dc90-7079-44d3-b2a0-e6ca77211e46"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:15:47 crc kubenswrapper[4981]: I0227 19:15:47.042478 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d185dc90-7079-44d3-b2a0-e6ca77211e46-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d185dc90-7079-44d3-b2a0-e6ca77211e46" (UID: "d185dc90-7079-44d3-b2a0-e6ca77211e46"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:15:47 crc kubenswrapper[4981]: I0227 19:15:47.042540 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d185dc90-7079-44d3-b2a0-e6ca77211e46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d185dc90-7079-44d3-b2a0-e6ca77211e46" (UID: "d185dc90-7079-44d3-b2a0-e6ca77211e46"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:15:47 crc kubenswrapper[4981]: I0227 19:15:47.043422 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d185dc90-7079-44d3-b2a0-e6ca77211e46-config-data" (OuterVolumeSpecName: "config-data") pod "d185dc90-7079-44d3-b2a0-e6ca77211e46" (UID: "d185dc90-7079-44d3-b2a0-e6ca77211e46"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:15:47 crc kubenswrapper[4981]: I0227 19:15:47.044647 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d185dc90-7079-44d3-b2a0-e6ca77211e46-kube-api-access-6kj4w" (OuterVolumeSpecName: "kube-api-access-6kj4w") pod "d185dc90-7079-44d3-b2a0-e6ca77211e46" (UID: "d185dc90-7079-44d3-b2a0-e6ca77211e46"). InnerVolumeSpecName "kube-api-access-6kj4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:15:47 crc kubenswrapper[4981]: I0227 19:15:47.045160 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d185dc90-7079-44d3-b2a0-e6ca77211e46-scripts" (OuterVolumeSpecName: "scripts") pod "d185dc90-7079-44d3-b2a0-e6ca77211e46" (UID: "d185dc90-7079-44d3-b2a0-e6ca77211e46"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:15:47 crc kubenswrapper[4981]: I0227 19:15:47.138878 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d185dc90-7079-44d3-b2a0-e6ca77211e46-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:47 crc kubenswrapper[4981]: I0227 19:15:47.138923 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d185dc90-7079-44d3-b2a0-e6ca77211e46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:47 crc kubenswrapper[4981]: I0227 19:15:47.138939 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d185dc90-7079-44d3-b2a0-e6ca77211e46-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:47 crc kubenswrapper[4981]: I0227 19:15:47.138951 4981 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d185dc90-7079-44d3-b2a0-e6ca77211e46-log-httpd\") on node \"crc\" 
DevicePath \"\"" Feb 27 19:15:47 crc kubenswrapper[4981]: I0227 19:15:47.138965 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kj4w\" (UniqueName: \"kubernetes.io/projected/d185dc90-7079-44d3-b2a0-e6ca77211e46-kube-api-access-6kj4w\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:47 crc kubenswrapper[4981]: I0227 19:15:47.138977 4981 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d185dc90-7079-44d3-b2a0-e6ca77211e46-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:47 crc kubenswrapper[4981]: I0227 19:15:47.628594 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" Feb 27 19:15:47 crc kubenswrapper[4981]: E0227 19:15:47.629154 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:15:47 crc kubenswrapper[4981]: I0227 19:15:47.642005 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ff4f098-86f0-4676-8254-f239843c7685" path="/var/lib/kubelet/pods/6ff4f098-86f0-4676-8254-f239843c7685/volumes" Feb 27 19:15:47 crc kubenswrapper[4981]: I0227 19:15:47.972774 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.038604 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.044619 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.070322 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.072532 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.079840 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.080282 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.096360 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.157736 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e652cdc7-a577-4e73-99da-01c7fc2c45f9-scripts\") pod \"ceilometer-0\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " pod="openstack/ceilometer-0" Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.157990 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e652cdc7-a577-4e73-99da-01c7fc2c45f9-run-httpd\") pod \"ceilometer-0\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " pod="openstack/ceilometer-0" Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.158594 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e652cdc7-a577-4e73-99da-01c7fc2c45f9-config-data\") pod \"ceilometer-0\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " pod="openstack/ceilometer-0" Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.158683 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e652cdc7-a577-4e73-99da-01c7fc2c45f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " pod="openstack/ceilometer-0" Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.158753 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e652cdc7-a577-4e73-99da-01c7fc2c45f9-log-httpd\") pod \"ceilometer-0\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " pod="openstack/ceilometer-0" Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.158812 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e652cdc7-a577-4e73-99da-01c7fc2c45f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " pod="openstack/ceilometer-0" Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.158947 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89hpt\" (UniqueName: \"kubernetes.io/projected/e652cdc7-a577-4e73-99da-01c7fc2c45f9-kube-api-access-89hpt\") pod \"ceilometer-0\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " pod="openstack/ceilometer-0" Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.261223 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e652cdc7-a577-4e73-99da-01c7fc2c45f9-config-data\") pod \"ceilometer-0\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " pod="openstack/ceilometer-0" Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.261283 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e652cdc7-a577-4e73-99da-01c7fc2c45f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " pod="openstack/ceilometer-0" Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.261316 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e652cdc7-a577-4e73-99da-01c7fc2c45f9-log-httpd\") pod \"ceilometer-0\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " pod="openstack/ceilometer-0" Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.261337 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e652cdc7-a577-4e73-99da-01c7fc2c45f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " pod="openstack/ceilometer-0" Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.261368 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89hpt\" (UniqueName: \"kubernetes.io/projected/e652cdc7-a577-4e73-99da-01c7fc2c45f9-kube-api-access-89hpt\") pod \"ceilometer-0\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " pod="openstack/ceilometer-0" Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.261416 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e652cdc7-a577-4e73-99da-01c7fc2c45f9-scripts\") pod \"ceilometer-0\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " pod="openstack/ceilometer-0" Feb 27 19:15:48 crc kubenswrapper[4981]: 
I0227 19:15:48.261449 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e652cdc7-a577-4e73-99da-01c7fc2c45f9-run-httpd\") pod \"ceilometer-0\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " pod="openstack/ceilometer-0" Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.262174 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e652cdc7-a577-4e73-99da-01c7fc2c45f9-log-httpd\") pod \"ceilometer-0\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " pod="openstack/ceilometer-0" Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.262211 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e652cdc7-a577-4e73-99da-01c7fc2c45f9-run-httpd\") pod \"ceilometer-0\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " pod="openstack/ceilometer-0" Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.267553 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e652cdc7-a577-4e73-99da-01c7fc2c45f9-config-data\") pod \"ceilometer-0\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " pod="openstack/ceilometer-0" Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.267805 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e652cdc7-a577-4e73-99da-01c7fc2c45f9-scripts\") pod \"ceilometer-0\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " pod="openstack/ceilometer-0" Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.268472 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e652cdc7-a577-4e73-99da-01c7fc2c45f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " 
pod="openstack/ceilometer-0" Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.272976 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e652cdc7-a577-4e73-99da-01c7fc2c45f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " pod="openstack/ceilometer-0" Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.289962 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89hpt\" (UniqueName: \"kubernetes.io/projected/e652cdc7-a577-4e73-99da-01c7fc2c45f9-kube-api-access-89hpt\") pod \"ceilometer-0\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " pod="openstack/ceilometer-0" Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.400689 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.883695 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:15:48 crc kubenswrapper[4981]: I0227 19:15:48.981682 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e652cdc7-a577-4e73-99da-01c7fc2c45f9","Type":"ContainerStarted","Data":"254e459cf95d19458b35e9b7f771d7b0becf197d5ba97e042fc05b8df94b2b28"} Feb 27 19:15:49 crc kubenswrapper[4981]: I0227 19:15:49.709431 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d185dc90-7079-44d3-b2a0-e6ca77211e46" path="/var/lib/kubelet/pods/d185dc90-7079-44d3-b2a0-e6ca77211e46/volumes" Feb 27 19:15:50 crc kubenswrapper[4981]: I0227 19:15:50.998132 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e652cdc7-a577-4e73-99da-01c7fc2c45f9","Type":"ContainerStarted","Data":"5fab91688329874bd31d1532c5de0a3d1f511968acfffc988f1258be6405b7f5"} Feb 27 19:15:53 crc kubenswrapper[4981]: I0227 19:15:53.015085 4981 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e652cdc7-a577-4e73-99da-01c7fc2c45f9","Type":"ContainerStarted","Data":"eab0c9776977ec234e7403e6cd9f94b0ef6ead48601bb5a63aaa5a6bdb248954"} Feb 27 19:15:55 crc kubenswrapper[4981]: I0227 19:15:55.035863 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e652cdc7-a577-4e73-99da-01c7fc2c45f9","Type":"ContainerStarted","Data":"854bb8d7ddb95bc9d55fff3d29b6f314411516fe37426da7367f570ec25b783f"} Feb 27 19:15:58 crc kubenswrapper[4981]: I0227 19:15:58.629073 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" Feb 27 19:15:58 crc kubenswrapper[4981]: E0227 19:15:58.629986 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:16:00 crc kubenswrapper[4981]: I0227 19:16:00.141015 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536996-zdg5k"] Feb 27 19:16:00 crc kubenswrapper[4981]: I0227 19:16:00.143297 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536996-zdg5k" Feb 27 19:16:00 crc kubenswrapper[4981]: I0227 19:16:00.147440 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:16:00 crc kubenswrapper[4981]: I0227 19:16:00.147673 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf" Feb 27 19:16:00 crc kubenswrapper[4981]: I0227 19:16:00.147956 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:16:00 crc kubenswrapper[4981]: I0227 19:16:00.159697 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536996-zdg5k"] Feb 27 19:16:00 crc kubenswrapper[4981]: I0227 19:16:00.296486 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcjlb\" (UniqueName: \"kubernetes.io/projected/cb159f40-08a1-4c27-9aa5-479f30ee1974-kube-api-access-qcjlb\") pod \"auto-csr-approver-29536996-zdg5k\" (UID: \"cb159f40-08a1-4c27-9aa5-479f30ee1974\") " pod="openshift-infra/auto-csr-approver-29536996-zdg5k" Feb 27 19:16:00 crc kubenswrapper[4981]: I0227 19:16:00.399345 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcjlb\" (UniqueName: \"kubernetes.io/projected/cb159f40-08a1-4c27-9aa5-479f30ee1974-kube-api-access-qcjlb\") pod \"auto-csr-approver-29536996-zdg5k\" (UID: \"cb159f40-08a1-4c27-9aa5-479f30ee1974\") " pod="openshift-infra/auto-csr-approver-29536996-zdg5k" Feb 27 19:16:00 crc kubenswrapper[4981]: I0227 19:16:00.416916 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcjlb\" (UniqueName: \"kubernetes.io/projected/cb159f40-08a1-4c27-9aa5-479f30ee1974-kube-api-access-qcjlb\") pod \"auto-csr-approver-29536996-zdg5k\" (UID: \"cb159f40-08a1-4c27-9aa5-479f30ee1974\") " 
pod="openshift-infra/auto-csr-approver-29536996-zdg5k" Feb 27 19:16:00 crc kubenswrapper[4981]: I0227 19:16:00.560691 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536996-zdg5k" Feb 27 19:16:00 crc kubenswrapper[4981]: I0227 19:16:00.988120 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536996-zdg5k"] Feb 27 19:16:01 crc kubenswrapper[4981]: I0227 19:16:01.089903 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536996-zdg5k" event={"ID":"cb159f40-08a1-4c27-9aa5-479f30ee1974","Type":"ContainerStarted","Data":"afe82482c23781de2492037964636e48b21f19d40f1a70589031a5faf8fd4bf4"} Feb 27 19:16:01 crc kubenswrapper[4981]: I0227 19:16:01.093516 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e652cdc7-a577-4e73-99da-01c7fc2c45f9","Type":"ContainerStarted","Data":"060cc35cfd54b06974b6edfe87c00a5ecd4952b42c0c01437d0e2bce0da0c253"} Feb 27 19:16:01 crc kubenswrapper[4981]: I0227 19:16:01.093708 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 19:16:01 crc kubenswrapper[4981]: I0227 19:16:01.133931 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9215202580000001 podStartE2EDuration="13.133910323s" podCreationTimestamp="2026-02-27 19:15:48 +0000 UTC" firstStartedPulling="2026-02-27 19:15:48.882413529 +0000 UTC m=+1848.361194689" lastFinishedPulling="2026-02-27 19:16:00.094803594 +0000 UTC m=+1859.573584754" observedRunningTime="2026-02-27 19:16:01.124789172 +0000 UTC m=+1860.603570332" watchObservedRunningTime="2026-02-27 19:16:01.133910323 +0000 UTC m=+1860.612691483" Feb 27 19:16:04 crc kubenswrapper[4981]: I0227 19:16:04.121134 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536996-zdg5k" 
event={"ID":"cb159f40-08a1-4c27-9aa5-479f30ee1974","Type":"ContainerStarted","Data":"d541bf917ce5a598e79b6aa38ac4d242ca09683b99f1ccd4a0cff4489d84d2f0"} Feb 27 19:16:04 crc kubenswrapper[4981]: I0227 19:16:04.143312 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536996-zdg5k" podStartSLOduration=1.511766686 podStartE2EDuration="4.143289734s" podCreationTimestamp="2026-02-27 19:16:00 +0000 UTC" firstStartedPulling="2026-02-27 19:16:00.992745282 +0000 UTC m=+1860.471526432" lastFinishedPulling="2026-02-27 19:16:03.62426832 +0000 UTC m=+1863.103049480" observedRunningTime="2026-02-27 19:16:04.13745332 +0000 UTC m=+1863.616234500" watchObservedRunningTime="2026-02-27 19:16:04.143289734 +0000 UTC m=+1863.622070894" Feb 27 19:16:05 crc kubenswrapper[4981]: I0227 19:16:05.129688 4981 generic.go:334] "Generic (PLEG): container finished" podID="cb159f40-08a1-4c27-9aa5-479f30ee1974" containerID="d541bf917ce5a598e79b6aa38ac4d242ca09683b99f1ccd4a0cff4489d84d2f0" exitCode=0 Feb 27 19:16:05 crc kubenswrapper[4981]: I0227 19:16:05.129743 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536996-zdg5k" event={"ID":"cb159f40-08a1-4c27-9aa5-479f30ee1974","Type":"ContainerDied","Data":"d541bf917ce5a598e79b6aa38ac4d242ca09683b99f1ccd4a0cff4489d84d2f0"} Feb 27 19:16:06 crc kubenswrapper[4981]: I0227 19:16:06.540504 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536996-zdg5k" Feb 27 19:16:06 crc kubenswrapper[4981]: I0227 19:16:06.680887 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcjlb\" (UniqueName: \"kubernetes.io/projected/cb159f40-08a1-4c27-9aa5-479f30ee1974-kube-api-access-qcjlb\") pod \"cb159f40-08a1-4c27-9aa5-479f30ee1974\" (UID: \"cb159f40-08a1-4c27-9aa5-479f30ee1974\") " Feb 27 19:16:06 crc kubenswrapper[4981]: I0227 19:16:06.692553 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb159f40-08a1-4c27-9aa5-479f30ee1974-kube-api-access-qcjlb" (OuterVolumeSpecName: "kube-api-access-qcjlb") pod "cb159f40-08a1-4c27-9aa5-479f30ee1974" (UID: "cb159f40-08a1-4c27-9aa5-479f30ee1974"). InnerVolumeSpecName "kube-api-access-qcjlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:16:06 crc kubenswrapper[4981]: I0227 19:16:06.783559 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcjlb\" (UniqueName: \"kubernetes.io/projected/cb159f40-08a1-4c27-9aa5-479f30ee1974-kube-api-access-qcjlb\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:07 crc kubenswrapper[4981]: I0227 19:16:07.187762 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536996-zdg5k" event={"ID":"cb159f40-08a1-4c27-9aa5-479f30ee1974","Type":"ContainerDied","Data":"afe82482c23781de2492037964636e48b21f19d40f1a70589031a5faf8fd4bf4"} Feb 27 19:16:07 crc kubenswrapper[4981]: I0227 19:16:07.187811 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afe82482c23781de2492037964636e48b21f19d40f1a70589031a5faf8fd4bf4" Feb 27 19:16:07 crc kubenswrapper[4981]: I0227 19:16:07.187876 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536996-zdg5k" Feb 27 19:16:07 crc kubenswrapper[4981]: I0227 19:16:07.237876 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536990-lrfs7"] Feb 27 19:16:07 crc kubenswrapper[4981]: I0227 19:16:07.246691 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536990-lrfs7"] Feb 27 19:16:07 crc kubenswrapper[4981]: I0227 19:16:07.647736 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eac9845d-e69d-4927-93c6-ec79af3de438" path="/var/lib/kubelet/pods/eac9845d-e69d-4927-93c6-ec79af3de438/volumes" Feb 27 19:16:08 crc kubenswrapper[4981]: I0227 19:16:08.957568 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:16:11 crc kubenswrapper[4981]: I0227 19:16:11.120650 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:16:11 crc kubenswrapper[4981]: I0227 19:16:11.635928 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" Feb 27 19:16:11 crc kubenswrapper[4981]: E0227 19:16:11.636349 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:16:13 crc kubenswrapper[4981]: I0227 19:16:13.934376 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 27 19:16:13 crc kubenswrapper[4981]: E0227 19:16:13.935345 4981 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cb159f40-08a1-4c27-9aa5-479f30ee1974" containerName="oc" Feb 27 19:16:13 crc kubenswrapper[4981]: I0227 19:16:13.935368 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb159f40-08a1-4c27-9aa5-479f30ee1974" containerName="oc" Feb 27 19:16:13 crc kubenswrapper[4981]: I0227 19:16:13.935594 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb159f40-08a1-4c27-9aa5-479f30ee1974" containerName="oc" Feb 27 19:16:13 crc kubenswrapper[4981]: I0227 19:16:13.936845 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 27 19:16:13 crc kubenswrapper[4981]: I0227 19:16:13.939906 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 27 19:16:13 crc kubenswrapper[4981]: I0227 19:16:13.940100 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-bzc9s" Feb 27 19:16:13 crc kubenswrapper[4981]: I0227 19:16:13.943395 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 27 19:16:13 crc kubenswrapper[4981]: I0227 19:16:13.948427 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.096289 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4b0b8ee2-b825-496c-a03d-b96c1047689a-openstack-config\") pod \"openstackclient\" (UID: \"4b0b8ee2-b825-496c-a03d-b96c1047689a\") " pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.096386 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4b0b8ee2-b825-496c-a03d-b96c1047689a-openstack-config-secret\") pod \"openstackclient\" (UID: 
\"4b0b8ee2-b825-496c-a03d-b96c1047689a\") " pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.096406 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b0b8ee2-b825-496c-a03d-b96c1047689a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4b0b8ee2-b825-496c-a03d-b96c1047689a\") " pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.096524 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c297q\" (UniqueName: \"kubernetes.io/projected/4b0b8ee2-b825-496c-a03d-b96c1047689a-kube-api-access-c297q\") pod \"openstackclient\" (UID: \"4b0b8ee2-b825-496c-a03d-b96c1047689a\") " pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.198638 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c297q\" (UniqueName: \"kubernetes.io/projected/4b0b8ee2-b825-496c-a03d-b96c1047689a-kube-api-access-c297q\") pod \"openstackclient\" (UID: \"4b0b8ee2-b825-496c-a03d-b96c1047689a\") " pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.198771 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4b0b8ee2-b825-496c-a03d-b96c1047689a-openstack-config\") pod \"openstackclient\" (UID: \"4b0b8ee2-b825-496c-a03d-b96c1047689a\") " pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.198836 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4b0b8ee2-b825-496c-a03d-b96c1047689a-openstack-config-secret\") pod \"openstackclient\" (UID: \"4b0b8ee2-b825-496c-a03d-b96c1047689a\") " pod="openstack/openstackclient" Feb 27 19:16:14 
crc kubenswrapper[4981]: I0227 19:16:14.198860 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b0b8ee2-b825-496c-a03d-b96c1047689a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4b0b8ee2-b825-496c-a03d-b96c1047689a\") " pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.199966 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4b0b8ee2-b825-496c-a03d-b96c1047689a-openstack-config\") pod \"openstackclient\" (UID: \"4b0b8ee2-b825-496c-a03d-b96c1047689a\") " pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.206577 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b0b8ee2-b825-496c-a03d-b96c1047689a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4b0b8ee2-b825-496c-a03d-b96c1047689a\") " pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.206817 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4b0b8ee2-b825-496c-a03d-b96c1047689a-openstack-config-secret\") pod \"openstackclient\" (UID: \"4b0b8ee2-b825-496c-a03d-b96c1047689a\") " pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.218776 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c297q\" (UniqueName: \"kubernetes.io/projected/4b0b8ee2-b825-496c-a03d-b96c1047689a-kube-api-access-c297q\") pod \"openstackclient\" (UID: \"4b0b8ee2-b825-496c-a03d-b96c1047689a\") " pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.254480 4981 generic.go:334] "Generic (PLEG): container finished" podID="8c06b80b-18d6-4fef-a1ce-2d513e9b58e6" 
containerID="cce6d6c81c960fa6f31644866b722a77872ba205b67bf733c80144cc6d6e3dfc" exitCode=0 Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.254533 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8k42j" event={"ID":"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6","Type":"ContainerDied","Data":"cce6d6c81c960fa6f31644866b722a77872ba205b67bf733c80144cc6d6e3dfc"} Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.263262 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.350147 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.365931 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.403711 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.406019 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.421152 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 27 19:16:14 crc kubenswrapper[4981]: E0227 19:16:14.488986 4981 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 27 19:16:14 crc kubenswrapper[4981]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_4b0b8ee2-b825-496c-a03d-b96c1047689a_0(c219a10ac6c41a3153c6c3ee210131a3bfb7047ab112fe946cbd494ca95cd5e4): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c219a10ac6c41a3153c6c3ee210131a3bfb7047ab112fe946cbd494ca95cd5e4" Netns:"/var/run/netns/983ac927-afb9-4dcb-b36c-89efd2e70469" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=c219a10ac6c41a3153c6c3ee210131a3bfb7047ab112fe946cbd494ca95cd5e4;K8S_POD_UID=4b0b8ee2-b825-496c-a03d-b96c1047689a" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/4b0b8ee2-b825-496c-a03d-b96c1047689a]: expected pod UID "4b0b8ee2-b825-496c-a03d-b96c1047689a" but got "6047b4ff-4778-43fd-8d8e-c84b76ff271e" from Kube API Feb 27 19:16:14 crc kubenswrapper[4981]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 19:16:14 crc kubenswrapper[4981]: > Feb 27 19:16:14 crc kubenswrapper[4981]: E0227 19:16:14.489123 4981 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err=< Feb 27 19:16:14 crc kubenswrapper[4981]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_4b0b8ee2-b825-496c-a03d-b96c1047689a_0(c219a10ac6c41a3153c6c3ee210131a3bfb7047ab112fe946cbd494ca95cd5e4): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c219a10ac6c41a3153c6c3ee210131a3bfb7047ab112fe946cbd494ca95cd5e4" Netns:"/var/run/netns/983ac927-afb9-4dcb-b36c-89efd2e70469" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=c219a10ac6c41a3153c6c3ee210131a3bfb7047ab112fe946cbd494ca95cd5e4;K8S_POD_UID=4b0b8ee2-b825-496c-a03d-b96c1047689a" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/4b0b8ee2-b825-496c-a03d-b96c1047689a]: expected pod UID "4b0b8ee2-b825-496c-a03d-b96c1047689a" but got "6047b4ff-4778-43fd-8d8e-c84b76ff271e" from Kube API Feb 27 19:16:14 crc kubenswrapper[4981]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 27 19:16:14 crc kubenswrapper[4981]: > pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.503897 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6047b4ff-4778-43fd-8d8e-c84b76ff271e-openstack-config-secret\") pod \"openstackclient\" (UID: \"6047b4ff-4778-43fd-8d8e-c84b76ff271e\") " 
pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.503988 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6047b4ff-4778-43fd-8d8e-c84b76ff271e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6047b4ff-4778-43fd-8d8e-c84b76ff271e\") " pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.504037 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6047b4ff-4778-43fd-8d8e-c84b76ff271e-openstack-config\") pod \"openstackclient\" (UID: \"6047b4ff-4778-43fd-8d8e-c84b76ff271e\") " pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.504080 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvhc7\" (UniqueName: \"kubernetes.io/projected/6047b4ff-4778-43fd-8d8e-c84b76ff271e-kube-api-access-fvhc7\") pod \"openstackclient\" (UID: \"6047b4ff-4778-43fd-8d8e-c84b76ff271e\") " pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.606156 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvhc7\" (UniqueName: \"kubernetes.io/projected/6047b4ff-4778-43fd-8d8e-c84b76ff271e-kube-api-access-fvhc7\") pod \"openstackclient\" (UID: \"6047b4ff-4778-43fd-8d8e-c84b76ff271e\") " pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.606266 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6047b4ff-4778-43fd-8d8e-c84b76ff271e-openstack-config-secret\") pod \"openstackclient\" (UID: \"6047b4ff-4778-43fd-8d8e-c84b76ff271e\") " pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: 
I0227 19:16:14.606374 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6047b4ff-4778-43fd-8d8e-c84b76ff271e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6047b4ff-4778-43fd-8d8e-c84b76ff271e\") " pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.606451 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6047b4ff-4778-43fd-8d8e-c84b76ff271e-openstack-config\") pod \"openstackclient\" (UID: \"6047b4ff-4778-43fd-8d8e-c84b76ff271e\") " pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.607521 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6047b4ff-4778-43fd-8d8e-c84b76ff271e-openstack-config\") pod \"openstackclient\" (UID: \"6047b4ff-4778-43fd-8d8e-c84b76ff271e\") " pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.611698 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6047b4ff-4778-43fd-8d8e-c84b76ff271e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6047b4ff-4778-43fd-8d8e-c84b76ff271e\") " pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.612132 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6047b4ff-4778-43fd-8d8e-c84b76ff271e-openstack-config-secret\") pod \"openstackclient\" (UID: \"6047b4ff-4778-43fd-8d8e-c84b76ff271e\") " pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.626373 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvhc7\" (UniqueName: 
\"kubernetes.io/projected/6047b4ff-4778-43fd-8d8e-c84b76ff271e-kube-api-access-fvhc7\") pod \"openstackclient\" (UID: \"6047b4ff-4778-43fd-8d8e-c84b76ff271e\") " pod="openstack/openstackclient" Feb 27 19:16:14 crc kubenswrapper[4981]: I0227 19:16:14.744367 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.227722 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.264352 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6047b4ff-4778-43fd-8d8e-c84b76ff271e","Type":"ContainerStarted","Data":"7251dd1752053681fe51883989f3f780a29ab6c1427ca398f4e2626ca195e0c9"} Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.264420 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.268554 4981 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="4b0b8ee2-b825-496c-a03d-b96c1047689a" podUID="6047b4ff-4778-43fd-8d8e-c84b76ff271e" Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.310102 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.441601 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4b0b8ee2-b825-496c-a03d-b96c1047689a-openstack-config\") pod \"4b0b8ee2-b825-496c-a03d-b96c1047689a\" (UID: \"4b0b8ee2-b825-496c-a03d-b96c1047689a\") " Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.441784 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b0b8ee2-b825-496c-a03d-b96c1047689a-combined-ca-bundle\") pod \"4b0b8ee2-b825-496c-a03d-b96c1047689a\" (UID: \"4b0b8ee2-b825-496c-a03d-b96c1047689a\") " Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.441908 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c297q\" (UniqueName: \"kubernetes.io/projected/4b0b8ee2-b825-496c-a03d-b96c1047689a-kube-api-access-c297q\") pod \"4b0b8ee2-b825-496c-a03d-b96c1047689a\" (UID: \"4b0b8ee2-b825-496c-a03d-b96c1047689a\") " Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.441951 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4b0b8ee2-b825-496c-a03d-b96c1047689a-openstack-config-secret\") pod \"4b0b8ee2-b825-496c-a03d-b96c1047689a\" (UID: \"4b0b8ee2-b825-496c-a03d-b96c1047689a\") " Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.443225 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b0b8ee2-b825-496c-a03d-b96c1047689a-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "4b0b8ee2-b825-496c-a03d-b96c1047689a" (UID: "4b0b8ee2-b825-496c-a03d-b96c1047689a"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.474663 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b0b8ee2-b825-496c-a03d-b96c1047689a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b0b8ee2-b825-496c-a03d-b96c1047689a" (UID: "4b0b8ee2-b825-496c-a03d-b96c1047689a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.480450 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b0b8ee2-b825-496c-a03d-b96c1047689a-kube-api-access-c297q" (OuterVolumeSpecName: "kube-api-access-c297q") pod "4b0b8ee2-b825-496c-a03d-b96c1047689a" (UID: "4b0b8ee2-b825-496c-a03d-b96c1047689a"). InnerVolumeSpecName "kube-api-access-c297q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.481304 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b0b8ee2-b825-496c-a03d-b96c1047689a-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "4b0b8ee2-b825-496c-a03d-b96c1047689a" (UID: "4b0b8ee2-b825-496c-a03d-b96c1047689a"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.545223 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b0b8ee2-b825-496c-a03d-b96c1047689a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.545264 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c297q\" (UniqueName: \"kubernetes.io/projected/4b0b8ee2-b825-496c-a03d-b96c1047689a-kube-api-access-c297q\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.545276 4981 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4b0b8ee2-b825-496c-a03d-b96c1047689a-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.545284 4981 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4b0b8ee2-b825-496c-a03d-b96c1047689a-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.654543 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b0b8ee2-b825-496c-a03d-b96c1047689a" path="/var/lib/kubelet/pods/4b0b8ee2-b825-496c-a03d-b96c1047689a/volumes" Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.778722 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8k42j" Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.953650 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-logs\") pod \"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6\" (UID: \"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6\") " Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.953751 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-scripts\") pod \"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6\" (UID: \"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6\") " Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.953805 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw7qk\" (UniqueName: \"kubernetes.io/projected/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-kube-api-access-xw7qk\") pod \"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6\" (UID: \"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6\") " Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.953862 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-combined-ca-bundle\") pod \"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6\" (UID: \"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6\") " Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.953945 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-config-data\") pod \"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6\" (UID: \"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6\") " Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.954330 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-logs" (OuterVolumeSpecName: "logs") pod "8c06b80b-18d6-4fef-a1ce-2d513e9b58e6" (UID: "8c06b80b-18d6-4fef-a1ce-2d513e9b58e6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.954539 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-logs\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.958731 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-kube-api-access-xw7qk" (OuterVolumeSpecName: "kube-api-access-xw7qk") pod "8c06b80b-18d6-4fef-a1ce-2d513e9b58e6" (UID: "8c06b80b-18d6-4fef-a1ce-2d513e9b58e6"). InnerVolumeSpecName "kube-api-access-xw7qk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.958832 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-scripts" (OuterVolumeSpecName: "scripts") pod "8c06b80b-18d6-4fef-a1ce-2d513e9b58e6" (UID: "8c06b80b-18d6-4fef-a1ce-2d513e9b58e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.983938 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-config-data" (OuterVolumeSpecName: "config-data") pod "8c06b80b-18d6-4fef-a1ce-2d513e9b58e6" (UID: "8c06b80b-18d6-4fef-a1ce-2d513e9b58e6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:16:15 crc kubenswrapper[4981]: I0227 19:16:15.983966 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c06b80b-18d6-4fef-a1ce-2d513e9b58e6" (UID: "8c06b80b-18d6-4fef-a1ce-2d513e9b58e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.056477 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.056516 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw7qk\" (UniqueName: \"kubernetes.io/projected/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-kube-api-access-xw7qk\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.056531 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.056544 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.275663 4981 generic.go:334] "Generic (PLEG): container finished" podID="f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c" containerID="94f7cad0d48ab4cdb999663cdeab7da040c74451f9b64c26617c577f369d2053" exitCode=0 Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.275750 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-68687" 
event={"ID":"f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c","Type":"ContainerDied","Data":"94f7cad0d48ab4cdb999663cdeab7da040c74451f9b64c26617c577f369d2053"} Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.278698 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.279213 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8k42j" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.279241 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8k42j" event={"ID":"8c06b80b-18d6-4fef-a1ce-2d513e9b58e6","Type":"ContainerDied","Data":"6250adedbd10da3f42ff07713d26075e0ea19b539fe253e03892c890e0ac7dff"} Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.279278 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6250adedbd10da3f42ff07713d26075e0ea19b539fe253e03892c890e0ac7dff" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.302166 4981 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="4b0b8ee2-b825-496c-a03d-b96c1047689a" podUID="6047b4ff-4778-43fd-8d8e-c84b76ff271e" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.371770 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6f8d597b78-f58nv"] Feb 27 19:16:16 crc kubenswrapper[4981]: E0227 19:16:16.372169 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c06b80b-18d6-4fef-a1ce-2d513e9b58e6" containerName="placement-db-sync" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.372186 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c06b80b-18d6-4fef-a1ce-2d513e9b58e6" containerName="placement-db-sync" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.372384 4981 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8c06b80b-18d6-4fef-a1ce-2d513e9b58e6" containerName="placement-db-sync" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.373236 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.379835 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.380149 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.380300 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hg7bb" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.380542 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.380655 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.389518 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6f8d597b78-f58nv"] Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.566084 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-logs\") pod \"placement-6f8d597b78-f58nv\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.566532 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-scripts\") pod \"placement-6f8d597b78-f58nv\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") 
" pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.566566 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-public-tls-certs\") pod \"placement-6f8d597b78-f58nv\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.566620 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pvsv\" (UniqueName: \"kubernetes.io/projected/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-kube-api-access-7pvsv\") pod \"placement-6f8d597b78-f58nv\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.566717 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-config-data\") pod \"placement-6f8d597b78-f58nv\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.566840 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-internal-tls-certs\") pod \"placement-6f8d597b78-f58nv\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.567016 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-combined-ca-bundle\") pod \"placement-6f8d597b78-f58nv\" (UID: 
\"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.668281 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-logs\") pod \"placement-6f8d597b78-f58nv\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.668331 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-scripts\") pod \"placement-6f8d597b78-f58nv\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.668356 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-public-tls-certs\") pod \"placement-6f8d597b78-f58nv\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.668397 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pvsv\" (UniqueName: \"kubernetes.io/projected/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-kube-api-access-7pvsv\") pod \"placement-6f8d597b78-f58nv\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.668413 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-config-data\") pod \"placement-6f8d597b78-f58nv\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:16:16 
crc kubenswrapper[4981]: I0227 19:16:16.668444 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-internal-tls-certs\") pod \"placement-6f8d597b78-f58nv\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.668490 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-combined-ca-bundle\") pod \"placement-6f8d597b78-f58nv\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.669169 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-logs\") pod \"placement-6f8d597b78-f58nv\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.673740 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-combined-ca-bundle\") pod \"placement-6f8d597b78-f58nv\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.675855 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-config-data\") pod \"placement-6f8d597b78-f58nv\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.677537 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-internal-tls-certs\") pod \"placement-6f8d597b78-f58nv\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.679606 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-scripts\") pod \"placement-6f8d597b78-f58nv\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.681721 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-public-tls-certs\") pod \"placement-6f8d597b78-f58nv\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.687461 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pvsv\" (UniqueName: \"kubernetes.io/projected/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-kube-api-access-7pvsv\") pod \"placement-6f8d597b78-f58nv\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:16:16 crc kubenswrapper[4981]: I0227 19:16:16.698154 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.176659 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6f8d597b78-f58nv"] Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.289988 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f8d597b78-f58nv" event={"ID":"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2","Type":"ContainerStarted","Data":"fcaf84ff7ea507da5a5af79d58eee42b4e9ec5f09fd405e17f59181df9902115"} Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.323105 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-fd6854db9-vlzhb"] Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.325370 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.330873 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.331339 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.331502 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.375406 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-fd6854db9-vlzhb"] Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.486456 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4885\" (UniqueName: \"kubernetes.io/projected/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-kube-api-access-x4885\") pod \"swift-proxy-fd6854db9-vlzhb\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:17 crc 
kubenswrapper[4981]: I0227 19:16:17.486903 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-log-httpd\") pod \"swift-proxy-fd6854db9-vlzhb\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.486934 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-combined-ca-bundle\") pod \"swift-proxy-fd6854db9-vlzhb\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.487026 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-etc-swift\") pod \"swift-proxy-fd6854db9-vlzhb\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.487096 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-config-data\") pod \"swift-proxy-fd6854db9-vlzhb\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.487147 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-public-tls-certs\") pod \"swift-proxy-fd6854db9-vlzhb\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 
19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.487184 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-run-httpd\") pod \"swift-proxy-fd6854db9-vlzhb\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.487206 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-internal-tls-certs\") pod \"swift-proxy-fd6854db9-vlzhb\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.588525 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-log-httpd\") pod \"swift-proxy-fd6854db9-vlzhb\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.588592 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-combined-ca-bundle\") pod \"swift-proxy-fd6854db9-vlzhb\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.588645 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-etc-swift\") pod \"swift-proxy-fd6854db9-vlzhb\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 
19:16:17.588684 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-config-data\") pod \"swift-proxy-fd6854db9-vlzhb\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.588731 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-public-tls-certs\") pod \"swift-proxy-fd6854db9-vlzhb\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.588763 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-run-httpd\") pod \"swift-proxy-fd6854db9-vlzhb\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.588787 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-internal-tls-certs\") pod \"swift-proxy-fd6854db9-vlzhb\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.588872 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4885\" (UniqueName: \"kubernetes.io/projected/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-kube-api-access-x4885\") pod \"swift-proxy-fd6854db9-vlzhb\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.589559 4981 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-log-httpd\") pod \"swift-proxy-fd6854db9-vlzhb\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.590491 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-run-httpd\") pod \"swift-proxy-fd6854db9-vlzhb\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.596835 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-public-tls-certs\") pod \"swift-proxy-fd6854db9-vlzhb\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.596984 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-config-data\") pod \"swift-proxy-fd6854db9-vlzhb\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.599730 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-combined-ca-bundle\") pod \"swift-proxy-fd6854db9-vlzhb\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.599993 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-internal-tls-certs\") pod \"swift-proxy-fd6854db9-vlzhb\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.601558 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-etc-swift\") pod \"swift-proxy-fd6854db9-vlzhb\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.612483 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4885\" (UniqueName: \"kubernetes.io/projected/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-kube-api-access-x4885\") pod \"swift-proxy-fd6854db9-vlzhb\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.662547 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-68687" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.672464 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.690111 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c-db-sync-config-data\") pod \"f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c\" (UID: \"f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c\") " Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.690207 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6np2\" (UniqueName: \"kubernetes.io/projected/f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c-kube-api-access-l6np2\") pod \"f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c\" (UID: \"f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c\") " Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.690232 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c-combined-ca-bundle\") pod \"f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c\" (UID: \"f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c\") " Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.694584 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c-kube-api-access-l6np2" (OuterVolumeSpecName: "kube-api-access-l6np2") pod "f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c" (UID: "f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c"). InnerVolumeSpecName "kube-api-access-l6np2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.698872 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c" (UID: "f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.720322 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c" (UID: "f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.792495 4981 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.792844 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6np2\" (UniqueName: \"kubernetes.io/projected/f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c-kube-api-access-l6np2\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:17 crc kubenswrapper[4981]: I0227 19:16:17.792857 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.303697 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-68687" event={"ID":"f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c","Type":"ContainerDied","Data":"18ef712d734fbb944b4cdb7e21e3abed785221f0363bdf4e7b9f68be74270c46"} Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.304075 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18ef712d734fbb944b4cdb7e21e3abed785221f0363bdf4e7b9f68be74270c46" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.304140 4981 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-68687" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.312371 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f8d597b78-f58nv" event={"ID":"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2","Type":"ContainerStarted","Data":"da47666533c186d6e31e8632cdd467e851243fc49eff7f7fcac48865f970ee5b"} Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.333339 4981 scope.go:117] "RemoveContainer" containerID="ef59041208e1d70019ae21b38567b2c6f87762c9e6b73a7b36517b78f750ad6a" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.409036 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.588219 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-649cdc5f7c-t45d9"] Feb 27 19:16:18 crc kubenswrapper[4981]: E0227 19:16:18.589147 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c" containerName="barbican-db-sync" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.589258 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c" containerName="barbican-db-sync" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.589587 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c" containerName="barbican-db-sync" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.590855 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-649cdc5f7c-t45d9" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.597851 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.598105 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.598730 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9995n" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.642260 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-649cdc5f7c-t45d9"] Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.677141 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm"] Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.679115 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.683819 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.714578 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b5819ab-18f7-4885-a4b9-a6a3401903a1-config-data-custom\") pod \"barbican-worker-649cdc5f7c-t45d9\" (UID: \"0b5819ab-18f7-4885-a4b9-a6a3401903a1\") " pod="openstack/barbican-worker-649cdc5f7c-t45d9" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.714749 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b5819ab-18f7-4885-a4b9-a6a3401903a1-combined-ca-bundle\") pod \"barbican-worker-649cdc5f7c-t45d9\" (UID: \"0b5819ab-18f7-4885-a4b9-a6a3401903a1\") " pod="openstack/barbican-worker-649cdc5f7c-t45d9" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.714844 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/392b1bc3-d461-4cc5-8d63-64922c6c3d04-config-data\") pod \"barbican-keystone-listener-69d4bd5f7d-zs8qm\" (UID: \"392b1bc3-d461-4cc5-8d63-64922c6c3d04\") " pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.714938 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/392b1bc3-d461-4cc5-8d63-64922c6c3d04-config-data-custom\") pod \"barbican-keystone-listener-69d4bd5f7d-zs8qm\" (UID: \"392b1bc3-d461-4cc5-8d63-64922c6c3d04\") " pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" Feb 27 19:16:18 
crc kubenswrapper[4981]: I0227 19:16:18.715025 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/392b1bc3-d461-4cc5-8d63-64922c6c3d04-logs\") pod \"barbican-keystone-listener-69d4bd5f7d-zs8qm\" (UID: \"392b1bc3-d461-4cc5-8d63-64922c6c3d04\") " pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.715159 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b5819ab-18f7-4885-a4b9-a6a3401903a1-config-data\") pod \"barbican-worker-649cdc5f7c-t45d9\" (UID: \"0b5819ab-18f7-4885-a4b9-a6a3401903a1\") " pod="openstack/barbican-worker-649cdc5f7c-t45d9" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.715310 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392b1bc3-d461-4cc5-8d63-64922c6c3d04-combined-ca-bundle\") pod \"barbican-keystone-listener-69d4bd5f7d-zs8qm\" (UID: \"392b1bc3-d461-4cc5-8d63-64922c6c3d04\") " pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.715493 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b5819ab-18f7-4885-a4b9-a6a3401903a1-logs\") pod \"barbican-worker-649cdc5f7c-t45d9\" (UID: \"0b5819ab-18f7-4885-a4b9-a6a3401903a1\") " pod="openstack/barbican-worker-649cdc5f7c-t45d9" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.715630 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b45ct\" (UniqueName: \"kubernetes.io/projected/392b1bc3-d461-4cc5-8d63-64922c6c3d04-kube-api-access-b45ct\") pod \"barbican-keystone-listener-69d4bd5f7d-zs8qm\" (UID: 
\"392b1bc3-d461-4cc5-8d63-64922c6c3d04\") " pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.715864 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5htrg\" (UniqueName: \"kubernetes.io/projected/0b5819ab-18f7-4885-a4b9-a6a3401903a1-kube-api-access-5htrg\") pod \"barbican-worker-649cdc5f7c-t45d9\" (UID: \"0b5819ab-18f7-4885-a4b9-a6a3401903a1\") " pod="openstack/barbican-worker-649cdc5f7c-t45d9" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.734866 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm"] Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.755148 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58957f86ff-sn4jh"] Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.756666 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.764613 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58957f86ff-sn4jh"] Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.819577 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-dns-swift-storage-0\") pod \"dnsmasq-dns-58957f86ff-sn4jh\" (UID: \"6c81e863-b29e-405f-b9e5-9979b695bcd2\") " pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.840242 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5htrg\" (UniqueName: \"kubernetes.io/projected/0b5819ab-18f7-4885-a4b9-a6a3401903a1-kube-api-access-5htrg\") pod \"barbican-worker-649cdc5f7c-t45d9\" (UID: \"0b5819ab-18f7-4885-a4b9-a6a3401903a1\") 
" pod="openstack/barbican-worker-649cdc5f7c-t45d9" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.840595 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bv85\" (UniqueName: \"kubernetes.io/projected/6c81e863-b29e-405f-b9e5-9979b695bcd2-kube-api-access-8bv85\") pod \"dnsmasq-dns-58957f86ff-sn4jh\" (UID: \"6c81e863-b29e-405f-b9e5-9979b695bcd2\") " pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.840739 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b5819ab-18f7-4885-a4b9-a6a3401903a1-config-data-custom\") pod \"barbican-worker-649cdc5f7c-t45d9\" (UID: \"0b5819ab-18f7-4885-a4b9-a6a3401903a1\") " pod="openstack/barbican-worker-649cdc5f7c-t45d9" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.842743 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b5819ab-18f7-4885-a4b9-a6a3401903a1-combined-ca-bundle\") pod \"barbican-worker-649cdc5f7c-t45d9\" (UID: \"0b5819ab-18f7-4885-a4b9-a6a3401903a1\") " pod="openstack/barbican-worker-649cdc5f7c-t45d9" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.842844 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-config\") pod \"dnsmasq-dns-58957f86ff-sn4jh\" (UID: \"6c81e863-b29e-405f-b9e5-9979b695bcd2\") " pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.842953 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/392b1bc3-d461-4cc5-8d63-64922c6c3d04-config-data\") pod \"barbican-keystone-listener-69d4bd5f7d-zs8qm\" (UID: 
\"392b1bc3-d461-4cc5-8d63-64922c6c3d04\") " pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.843237 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/392b1bc3-d461-4cc5-8d63-64922c6c3d04-config-data-custom\") pod \"barbican-keystone-listener-69d4bd5f7d-zs8qm\" (UID: \"392b1bc3-d461-4cc5-8d63-64922c6c3d04\") " pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.843339 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/392b1bc3-d461-4cc5-8d63-64922c6c3d04-logs\") pod \"barbican-keystone-listener-69d4bd5f7d-zs8qm\" (UID: \"392b1bc3-d461-4cc5-8d63-64922c6c3d04\") " pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.843622 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b5819ab-18f7-4885-a4b9-a6a3401903a1-config-data\") pod \"barbican-worker-649cdc5f7c-t45d9\" (UID: \"0b5819ab-18f7-4885-a4b9-a6a3401903a1\") " pod="openstack/barbican-worker-649cdc5f7c-t45d9" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.843702 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-ovsdbserver-sb\") pod \"dnsmasq-dns-58957f86ff-sn4jh\" (UID: \"6c81e863-b29e-405f-b9e5-9979b695bcd2\") " pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.843844 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392b1bc3-d461-4cc5-8d63-64922c6c3d04-combined-ca-bundle\") pod 
\"barbican-keystone-listener-69d4bd5f7d-zs8qm\" (UID: \"392b1bc3-d461-4cc5-8d63-64922c6c3d04\") " pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.843958 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-dns-svc\") pod \"dnsmasq-dns-58957f86ff-sn4jh\" (UID: \"6c81e863-b29e-405f-b9e5-9979b695bcd2\") " pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.844072 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-ovsdbserver-nb\") pod \"dnsmasq-dns-58957f86ff-sn4jh\" (UID: \"6c81e863-b29e-405f-b9e5-9979b695bcd2\") " pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.844170 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b5819ab-18f7-4885-a4b9-a6a3401903a1-logs\") pod \"barbican-worker-649cdc5f7c-t45d9\" (UID: \"0b5819ab-18f7-4885-a4b9-a6a3401903a1\") " pod="openstack/barbican-worker-649cdc5f7c-t45d9" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.844260 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b45ct\" (UniqueName: \"kubernetes.io/projected/392b1bc3-d461-4cc5-8d63-64922c6c3d04-kube-api-access-b45ct\") pod \"barbican-keystone-listener-69d4bd5f7d-zs8qm\" (UID: \"392b1bc3-d461-4cc5-8d63-64922c6c3d04\") " pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.847634 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/392b1bc3-d461-4cc5-8d63-64922c6c3d04-logs\") 
pod \"barbican-keystone-listener-69d4bd5f7d-zs8qm\" (UID: \"392b1bc3-d461-4cc5-8d63-64922c6c3d04\") " pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.851139 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/392b1bc3-d461-4cc5-8d63-64922c6c3d04-config-data\") pod \"barbican-keystone-listener-69d4bd5f7d-zs8qm\" (UID: \"392b1bc3-d461-4cc5-8d63-64922c6c3d04\") " pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.857370 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b5819ab-18f7-4885-a4b9-a6a3401903a1-combined-ca-bundle\") pod \"barbican-worker-649cdc5f7c-t45d9\" (UID: \"0b5819ab-18f7-4885-a4b9-a6a3401903a1\") " pod="openstack/barbican-worker-649cdc5f7c-t45d9" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.857962 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392b1bc3-d461-4cc5-8d63-64922c6c3d04-combined-ca-bundle\") pod \"barbican-keystone-listener-69d4bd5f7d-zs8qm\" (UID: \"392b1bc3-d461-4cc5-8d63-64922c6c3d04\") " pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.862753 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b5819ab-18f7-4885-a4b9-a6a3401903a1-logs\") pod \"barbican-worker-649cdc5f7c-t45d9\" (UID: \"0b5819ab-18f7-4885-a4b9-a6a3401903a1\") " pod="openstack/barbican-worker-649cdc5f7c-t45d9" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.864350 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b5819ab-18f7-4885-a4b9-a6a3401903a1-config-data\") pod 
\"barbican-worker-649cdc5f7c-t45d9\" (UID: \"0b5819ab-18f7-4885-a4b9-a6a3401903a1\") " pod="openstack/barbican-worker-649cdc5f7c-t45d9" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.897163 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5htrg\" (UniqueName: \"kubernetes.io/projected/0b5819ab-18f7-4885-a4b9-a6a3401903a1-kube-api-access-5htrg\") pod \"barbican-worker-649cdc5f7c-t45d9\" (UID: \"0b5819ab-18f7-4885-a4b9-a6a3401903a1\") " pod="openstack/barbican-worker-649cdc5f7c-t45d9" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.897495 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/392b1bc3-d461-4cc5-8d63-64922c6c3d04-config-data-custom\") pod \"barbican-keystone-listener-69d4bd5f7d-zs8qm\" (UID: \"392b1bc3-d461-4cc5-8d63-64922c6c3d04\") " pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.901813 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b5819ab-18f7-4885-a4b9-a6a3401903a1-config-data-custom\") pod \"barbican-worker-649cdc5f7c-t45d9\" (UID: \"0b5819ab-18f7-4885-a4b9-a6a3401903a1\") " pod="openstack/barbican-worker-649cdc5f7c-t45d9" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.911148 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b45ct\" (UniqueName: \"kubernetes.io/projected/392b1bc3-d461-4cc5-8d63-64922c6c3d04-kube-api-access-b45ct\") pod \"barbican-keystone-listener-69d4bd5f7d-zs8qm\" (UID: \"392b1bc3-d461-4cc5-8d63-64922c6c3d04\") " pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.918039 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-db6dc8fcb-c5pxk"] Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.922964 
4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-db6dc8fcb-c5pxk" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.931823 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-db6dc8fcb-c5pxk"] Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.935325 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.936956 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-649cdc5f7c-t45d9" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.951310 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr4lk\" (UniqueName: \"kubernetes.io/projected/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-kube-api-access-sr4lk\") pod \"barbican-api-db6dc8fcb-c5pxk\" (UID: \"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9\") " pod="openstack/barbican-api-db6dc8fcb-c5pxk" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.966557 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-ovsdbserver-sb\") pod \"dnsmasq-dns-58957f86ff-sn4jh\" (UID: \"6c81e863-b29e-405f-b9e5-9979b695bcd2\") " pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.966647 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-config-data\") pod \"barbican-api-db6dc8fcb-c5pxk\" (UID: \"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9\") " pod="openstack/barbican-api-db6dc8fcb-c5pxk" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.966702 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-config-data-custom\") pod \"barbican-api-db6dc8fcb-c5pxk\" (UID: \"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9\") " pod="openstack/barbican-api-db6dc8fcb-c5pxk" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.966796 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-dns-svc\") pod \"dnsmasq-dns-58957f86ff-sn4jh\" (UID: \"6c81e863-b29e-405f-b9e5-9979b695bcd2\") " pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.966828 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-ovsdbserver-nb\") pod \"dnsmasq-dns-58957f86ff-sn4jh\" (UID: \"6c81e863-b29e-405f-b9e5-9979b695bcd2\") " pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.966876 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-combined-ca-bundle\") pod \"barbican-api-db6dc8fcb-c5pxk\" (UID: \"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9\") " pod="openstack/barbican-api-db6dc8fcb-c5pxk" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.966902 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-logs\") pod \"barbican-api-db6dc8fcb-c5pxk\" (UID: \"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9\") " pod="openstack/barbican-api-db6dc8fcb-c5pxk" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.966994 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-dns-swift-storage-0\") pod \"dnsmasq-dns-58957f86ff-sn4jh\" (UID: \"6c81e863-b29e-405f-b9e5-9979b695bcd2\") " pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.967071 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bv85\" (UniqueName: \"kubernetes.io/projected/6c81e863-b29e-405f-b9e5-9979b695bcd2-kube-api-access-8bv85\") pod \"dnsmasq-dns-58957f86ff-sn4jh\" (UID: \"6c81e863-b29e-405f-b9e5-9979b695bcd2\") " pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.967121 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-config\") pod \"dnsmasq-dns-58957f86ff-sn4jh\" (UID: \"6c81e863-b29e-405f-b9e5-9979b695bcd2\") " pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.970682 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-config\") pod \"dnsmasq-dns-58957f86ff-sn4jh\" (UID: \"6c81e863-b29e-405f-b9e5-9979b695bcd2\") " pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.972238 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-ovsdbserver-nb\") pod \"dnsmasq-dns-58957f86ff-sn4jh\" (UID: \"6c81e863-b29e-405f-b9e5-9979b695bcd2\") " pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.972648 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-dns-swift-storage-0\") pod \"dnsmasq-dns-58957f86ff-sn4jh\" (UID: \"6c81e863-b29e-405f-b9e5-9979b695bcd2\") " pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" Feb 27 19:16:18 crc kubenswrapper[4981]: I0227 19:16:18.991682 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bv85\" (UniqueName: \"kubernetes.io/projected/6c81e863-b29e-405f-b9e5-9979b695bcd2-kube-api-access-8bv85\") pod \"dnsmasq-dns-58957f86ff-sn4jh\" (UID: \"6c81e863-b29e-405f-b9e5-9979b695bcd2\") " pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" Feb 27 19:16:19 crc kubenswrapper[4981]: I0227 19:16:18.999850 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-ovsdbserver-sb\") pod \"dnsmasq-dns-58957f86ff-sn4jh\" (UID: \"6c81e863-b29e-405f-b9e5-9979b695bcd2\") " pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" Feb 27 19:16:19 crc kubenswrapper[4981]: I0227 19:16:19.011272 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-dns-svc\") pod \"dnsmasq-dns-58957f86ff-sn4jh\" (UID: \"6c81e863-b29e-405f-b9e5-9979b695bcd2\") " pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" Feb 27 19:16:19 crc kubenswrapper[4981]: I0227 19:16:19.023888 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" Feb 27 19:16:19 crc kubenswrapper[4981]: I0227 19:16:19.069910 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr4lk\" (UniqueName: \"kubernetes.io/projected/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-kube-api-access-sr4lk\") pod \"barbican-api-db6dc8fcb-c5pxk\" (UID: \"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9\") " pod="openstack/barbican-api-db6dc8fcb-c5pxk" Feb 27 19:16:19 crc kubenswrapper[4981]: I0227 19:16:19.069980 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-config-data\") pod \"barbican-api-db6dc8fcb-c5pxk\" (UID: \"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9\") " pod="openstack/barbican-api-db6dc8fcb-c5pxk" Feb 27 19:16:19 crc kubenswrapper[4981]: I0227 19:16:19.070017 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-config-data-custom\") pod \"barbican-api-db6dc8fcb-c5pxk\" (UID: \"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9\") " pod="openstack/barbican-api-db6dc8fcb-c5pxk" Feb 27 19:16:19 crc kubenswrapper[4981]: I0227 19:16:19.070083 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-combined-ca-bundle\") pod \"barbican-api-db6dc8fcb-c5pxk\" (UID: \"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9\") " pod="openstack/barbican-api-db6dc8fcb-c5pxk" Feb 27 19:16:19 crc kubenswrapper[4981]: I0227 19:16:19.070103 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-logs\") pod \"barbican-api-db6dc8fcb-c5pxk\" (UID: \"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9\") " 
pod="openstack/barbican-api-db6dc8fcb-c5pxk" Feb 27 19:16:19 crc kubenswrapper[4981]: I0227 19:16:19.070495 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-logs\") pod \"barbican-api-db6dc8fcb-c5pxk\" (UID: \"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9\") " pod="openstack/barbican-api-db6dc8fcb-c5pxk" Feb 27 19:16:19 crc kubenswrapper[4981]: I0227 19:16:19.084180 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-combined-ca-bundle\") pod \"barbican-api-db6dc8fcb-c5pxk\" (UID: \"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9\") " pod="openstack/barbican-api-db6dc8fcb-c5pxk" Feb 27 19:16:19 crc kubenswrapper[4981]: I0227 19:16:19.087866 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-config-data-custom\") pod \"barbican-api-db6dc8fcb-c5pxk\" (UID: \"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9\") " pod="openstack/barbican-api-db6dc8fcb-c5pxk" Feb 27 19:16:19 crc kubenswrapper[4981]: I0227 19:16:19.088880 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-config-data\") pod \"barbican-api-db6dc8fcb-c5pxk\" (UID: \"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9\") " pod="openstack/barbican-api-db6dc8fcb-c5pxk" Feb 27 19:16:19 crc kubenswrapper[4981]: I0227 19:16:19.090763 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" Feb 27 19:16:19 crc kubenswrapper[4981]: I0227 19:16:19.110246 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr4lk\" (UniqueName: \"kubernetes.io/projected/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-kube-api-access-sr4lk\") pod \"barbican-api-db6dc8fcb-c5pxk\" (UID: \"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9\") " pod="openstack/barbican-api-db6dc8fcb-c5pxk" Feb 27 19:16:19 crc kubenswrapper[4981]: I0227 19:16:19.287541 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-db6dc8fcb-c5pxk" Feb 27 19:16:19 crc kubenswrapper[4981]: I0227 19:16:19.340931 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f8d597b78-f58nv" event={"ID":"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2","Type":"ContainerStarted","Data":"dae852a53f7febec558df780ac57acd7d91cce2fba1b3b86d956c36653347faa"} Feb 27 19:16:19 crc kubenswrapper[4981]: I0227 19:16:19.344989 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:16:19 crc kubenswrapper[4981]: I0227 19:16:19.345135 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:16:19 crc kubenswrapper[4981]: I0227 19:16:19.384747 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6f8d597b78-f58nv" podStartSLOduration=3.38472544 podStartE2EDuration="3.38472544s" podCreationTimestamp="2026-02-27 19:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:16:19.384023699 +0000 UTC m=+1878.862804859" watchObservedRunningTime="2026-02-27 19:16:19.38472544 +0000 UTC m=+1878.863506590" Feb 27 19:16:19 crc kubenswrapper[4981]: I0227 19:16:19.534078 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-worker-649cdc5f7c-t45d9"] Feb 27 19:16:19 crc kubenswrapper[4981]: W0227 19:16:19.549936 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b5819ab_18f7_4885_a4b9_a6a3401903a1.slice/crio-385b4de6e9976559662451b00686810be84c29adae75cd4fa55e324c82417251 WatchSource:0}: Error finding container 385b4de6e9976559662451b00686810be84c29adae75cd4fa55e324c82417251: Status 404 returned error can't find the container with id 385b4de6e9976559662451b00686810be84c29adae75cd4fa55e324c82417251 Feb 27 19:16:19 crc kubenswrapper[4981]: I0227 19:16:19.698106 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm"] Feb 27 19:16:19 crc kubenswrapper[4981]: I0227 19:16:19.741736 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-fd6854db9-vlzhb"] Feb 27 19:16:19 crc kubenswrapper[4981]: I0227 19:16:19.763628 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58957f86ff-sn4jh"] Feb 27 19:16:19 crc kubenswrapper[4981]: W0227 19:16:19.769848 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c81e863_b29e_405f_b9e5_9979b695bcd2.slice/crio-884822a58fb862b78d69da125faa4f520977583af373a2cf0f14cf8e3a887cbd WatchSource:0}: Error finding container 884822a58fb862b78d69da125faa4f520977583af373a2cf0f14cf8e3a887cbd: Status 404 returned error can't find the container with id 884822a58fb862b78d69da125faa4f520977583af373a2cf0f14cf8e3a887cbd Feb 27 19:16:19 crc kubenswrapper[4981]: I0227 19:16:19.865156 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-db6dc8fcb-c5pxk"] Feb 27 19:16:20 crc kubenswrapper[4981]: I0227 19:16:20.353914 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-649cdc5f7c-t45d9" 
event={"ID":"0b5819ab-18f7-4885-a4b9-a6a3401903a1","Type":"ContainerStarted","Data":"385b4de6e9976559662451b00686810be84c29adae75cd4fa55e324c82417251"} Feb 27 19:16:20 crc kubenswrapper[4981]: I0227 19:16:20.356018 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-fd6854db9-vlzhb" event={"ID":"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f","Type":"ContainerStarted","Data":"952ca9fd397f06e97e6cb589cce8711001ad9b1917f1597f155cbdfe54ecd748"} Feb 27 19:16:20 crc kubenswrapper[4981]: I0227 19:16:20.356089 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-fd6854db9-vlzhb" event={"ID":"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f","Type":"ContainerStarted","Data":"5e437d37a4c480c66b446389b230275009eff9e24954231b4d34cdecae0b30f2"} Feb 27 19:16:20 crc kubenswrapper[4981]: I0227 19:16:20.358260 4981 generic.go:334] "Generic (PLEG): container finished" podID="6c81e863-b29e-405f-b9e5-9979b695bcd2" containerID="64c38a5cec993faf734fe7777709d31dde9058b94675cc0df3f02ae82b4071d8" exitCode=0 Feb 27 19:16:20 crc kubenswrapper[4981]: I0227 19:16:20.359139 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" event={"ID":"6c81e863-b29e-405f-b9e5-9979b695bcd2","Type":"ContainerDied","Data":"64c38a5cec993faf734fe7777709d31dde9058b94675cc0df3f02ae82b4071d8"} Feb 27 19:16:20 crc kubenswrapper[4981]: I0227 19:16:20.359159 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" event={"ID":"6c81e863-b29e-405f-b9e5-9979b695bcd2","Type":"ContainerStarted","Data":"884822a58fb862b78d69da125faa4f520977583af373a2cf0f14cf8e3a887cbd"} Feb 27 19:16:20 crc kubenswrapper[4981]: I0227 19:16:20.362082 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" event={"ID":"392b1bc3-d461-4cc5-8d63-64922c6c3d04","Type":"ContainerStarted","Data":"d7bae1ce00211aa45d99ba1434230ca85e9fa9ed26d123f5c1c87dd87f21813d"} Feb 27 
19:16:20 crc kubenswrapper[4981]: I0227 19:16:20.368891 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-db6dc8fcb-c5pxk" event={"ID":"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9","Type":"ContainerStarted","Data":"8d637a38fd3283377e57fd4aee97428ac1a3acee6bddb828574c303a3880a1ae"} Feb 27 19:16:20 crc kubenswrapper[4981]: I0227 19:16:20.368943 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-db6dc8fcb-c5pxk" event={"ID":"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9","Type":"ContainerStarted","Data":"be55e92b6339fa1c9f5fb6e572fadc9ab75cb3f770f39a5444417bd4f28907c4"} Feb 27 19:16:20 crc kubenswrapper[4981]: I0227 19:16:20.371194 4981 generic.go:334] "Generic (PLEG): container finished" podID="094a0674-7bf9-4e18-9e70-8efed0ae3ac2" containerID="5bb3f4702c947998436733cc6cc2c2d7567ac9f3091ff61fd1bc793151cba664" exitCode=0 Feb 27 19:16:20 crc kubenswrapper[4981]: I0227 19:16:20.371297 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7h875" event={"ID":"094a0674-7bf9-4e18-9e70-8efed0ae3ac2","Type":"ContainerDied","Data":"5bb3f4702c947998436733cc6cc2c2d7567ac9f3091ff61fd1bc793151cba664"} Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.381333 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" event={"ID":"6c81e863-b29e-405f-b9e5-9979b695bcd2","Type":"ContainerStarted","Data":"aafa0bd1676bd36b8c5a721d7ee7f7e612b5640806af1569cadb7e4304fa1804"} Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.382390 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.385023 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-db6dc8fcb-c5pxk" event={"ID":"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9","Type":"ContainerStarted","Data":"2fbbc4dd27432812fea4b3b90085f81535a6ac71bf8986218925a46e0d890326"} Feb 27 
19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.385124 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-db6dc8fcb-c5pxk" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.385148 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-db6dc8fcb-c5pxk" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.393298 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-fd6854db9-vlzhb" event={"ID":"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f","Type":"ContainerStarted","Data":"469a1e71362f66007e5f99a18ff696a214d1ad52159039dec19da2dbfa3d13ff"} Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.423925 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" podStartSLOduration=3.423901276 podStartE2EDuration="3.423901276s" podCreationTimestamp="2026-02-27 19:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:16:21.41732497 +0000 UTC m=+1880.896106130" watchObservedRunningTime="2026-02-27 19:16:21.423901276 +0000 UTC m=+1880.902682436" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.468228 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-db6dc8fcb-c5pxk" podStartSLOduration=3.468207308 podStartE2EDuration="3.468207308s" podCreationTimestamp="2026-02-27 19:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:16:21.452259583 +0000 UTC m=+1880.931040763" watchObservedRunningTime="2026-02-27 19:16:21.468207308 +0000 UTC m=+1880.946988468" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.487965 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-fd6854db9-vlzhb" 
podStartSLOduration=4.487929826 podStartE2EDuration="4.487929826s" podCreationTimestamp="2026-02-27 19:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:16:21.47195619 +0000 UTC m=+1880.950737350" watchObservedRunningTime="2026-02-27 19:16:21.487929826 +0000 UTC m=+1880.966710986" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.569744 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-76f488968b-rp6r2"] Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.571640 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76f488968b-rp6r2" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.575868 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.576567 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.601350 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76f488968b-rp6r2"] Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.625401 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-public-tls-certs\") pod \"barbican-api-76f488968b-rp6r2\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " pod="openstack/barbican-api-76f488968b-rp6r2" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.625481 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmj9n\" (UniqueName: \"kubernetes.io/projected/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-kube-api-access-gmj9n\") pod \"barbican-api-76f488968b-rp6r2\" (UID: 
\"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " pod="openstack/barbican-api-76f488968b-rp6r2" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.625562 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-logs\") pod \"barbican-api-76f488968b-rp6r2\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " pod="openstack/barbican-api-76f488968b-rp6r2" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.625613 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-combined-ca-bundle\") pod \"barbican-api-76f488968b-rp6r2\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " pod="openstack/barbican-api-76f488968b-rp6r2" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.625670 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-internal-tls-certs\") pod \"barbican-api-76f488968b-rp6r2\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " pod="openstack/barbican-api-76f488968b-rp6r2" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.625724 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-config-data\") pod \"barbican-api-76f488968b-rp6r2\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " pod="openstack/barbican-api-76f488968b-rp6r2" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.625751 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-config-data-custom\") pod 
\"barbican-api-76f488968b-rp6r2\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " pod="openstack/barbican-api-76f488968b-rp6r2" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.727198 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-logs\") pod \"barbican-api-76f488968b-rp6r2\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " pod="openstack/barbican-api-76f488968b-rp6r2" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.727250 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-combined-ca-bundle\") pod \"barbican-api-76f488968b-rp6r2\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " pod="openstack/barbican-api-76f488968b-rp6r2" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.727285 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-internal-tls-certs\") pod \"barbican-api-76f488968b-rp6r2\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " pod="openstack/barbican-api-76f488968b-rp6r2" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.727343 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-config-data\") pod \"barbican-api-76f488968b-rp6r2\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " pod="openstack/barbican-api-76f488968b-rp6r2" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.727373 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-config-data-custom\") pod \"barbican-api-76f488968b-rp6r2\" (UID: 
\"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " pod="openstack/barbican-api-76f488968b-rp6r2" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.727436 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-public-tls-certs\") pod \"barbican-api-76f488968b-rp6r2\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " pod="openstack/barbican-api-76f488968b-rp6r2" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.727470 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmj9n\" (UniqueName: \"kubernetes.io/projected/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-kube-api-access-gmj9n\") pod \"barbican-api-76f488968b-rp6r2\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " pod="openstack/barbican-api-76f488968b-rp6r2" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.727788 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-logs\") pod \"barbican-api-76f488968b-rp6r2\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " pod="openstack/barbican-api-76f488968b-rp6r2" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.733125 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-public-tls-certs\") pod \"barbican-api-76f488968b-rp6r2\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " pod="openstack/barbican-api-76f488968b-rp6r2" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.733236 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-internal-tls-certs\") pod \"barbican-api-76f488968b-rp6r2\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " 
pod="openstack/barbican-api-76f488968b-rp6r2" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.733534 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-combined-ca-bundle\") pod \"barbican-api-76f488968b-rp6r2\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " pod="openstack/barbican-api-76f488968b-rp6r2" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.733704 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-config-data\") pod \"barbican-api-76f488968b-rp6r2\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " pod="openstack/barbican-api-76f488968b-rp6r2" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.737267 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-config-data-custom\") pod \"barbican-api-76f488968b-rp6r2\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " pod="openstack/barbican-api-76f488968b-rp6r2" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.752071 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmj9n\" (UniqueName: \"kubernetes.io/projected/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-kube-api-access-gmj9n\") pod \"barbican-api-76f488968b-rp6r2\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " pod="openstack/barbican-api-76f488968b-rp6r2" Feb 27 19:16:21 crc kubenswrapper[4981]: I0227 19:16:21.898334 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-76f488968b-rp6r2" Feb 27 19:16:22 crc kubenswrapper[4981]: I0227 19:16:22.403322 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:22 crc kubenswrapper[4981]: I0227 19:16:22.403748 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:26 crc kubenswrapper[4981]: I0227 19:16:26.629465 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" Feb 27 19:16:26 crc kubenswrapper[4981]: E0227 19:16:26.630735 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:16:27 crc kubenswrapper[4981]: I0227 19:16:27.687689 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:27 crc kubenswrapper[4981]: I0227 19:16:27.690859 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:16:27 crc kubenswrapper[4981]: I0227 19:16:27.797280 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-7h875" Feb 27 19:16:27 crc kubenswrapper[4981]: I0227 19:16:27.884716 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-scripts\") pod \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\" (UID: \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\") " Feb 27 19:16:27 crc kubenswrapper[4981]: I0227 19:16:27.884834 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c74n5\" (UniqueName: \"kubernetes.io/projected/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-kube-api-access-c74n5\") pod \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\" (UID: \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\") " Feb 27 19:16:27 crc kubenswrapper[4981]: I0227 19:16:27.884861 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-config-data\") pod \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\" (UID: \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\") " Feb 27 19:16:27 crc kubenswrapper[4981]: I0227 19:16:27.885733 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-etc-machine-id\") pod \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\" (UID: \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\") " Feb 27 19:16:27 crc kubenswrapper[4981]: I0227 19:16:27.885876 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-db-sync-config-data\") pod \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\" (UID: \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\") " Feb 27 19:16:27 crc kubenswrapper[4981]: I0227 19:16:27.885910 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-combined-ca-bundle\") pod \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\" (UID: \"094a0674-7bf9-4e18-9e70-8efed0ae3ac2\") " Feb 27 19:16:27 crc kubenswrapper[4981]: I0227 19:16:27.885820 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "094a0674-7bf9-4e18-9e70-8efed0ae3ac2" (UID: "094a0674-7bf9-4e18-9e70-8efed0ae3ac2"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:16:27 crc kubenswrapper[4981]: I0227 19:16:27.894658 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-scripts" (OuterVolumeSpecName: "scripts") pod "094a0674-7bf9-4e18-9e70-8efed0ae3ac2" (UID: "094a0674-7bf9-4e18-9e70-8efed0ae3ac2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:16:27 crc kubenswrapper[4981]: I0227 19:16:27.894702 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-kube-api-access-c74n5" (OuterVolumeSpecName: "kube-api-access-c74n5") pod "094a0674-7bf9-4e18-9e70-8efed0ae3ac2" (UID: "094a0674-7bf9-4e18-9e70-8efed0ae3ac2"). InnerVolumeSpecName "kube-api-access-c74n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:16:27 crc kubenswrapper[4981]: I0227 19:16:27.894792 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "094a0674-7bf9-4e18-9e70-8efed0ae3ac2" (UID: "094a0674-7bf9-4e18-9e70-8efed0ae3ac2"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:16:27 crc kubenswrapper[4981]: I0227 19:16:27.923308 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "094a0674-7bf9-4e18-9e70-8efed0ae3ac2" (UID: "094a0674-7bf9-4e18-9e70-8efed0ae3ac2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:16:27 crc kubenswrapper[4981]: I0227 19:16:27.934115 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-config-data" (OuterVolumeSpecName: "config-data") pod "094a0674-7bf9-4e18-9e70-8efed0ae3ac2" (UID: "094a0674-7bf9-4e18-9e70-8efed0ae3ac2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:16:27 crc kubenswrapper[4981]: I0227 19:16:27.988546 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c74n5\" (UniqueName: \"kubernetes.io/projected/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-kube-api-access-c74n5\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:27 crc kubenswrapper[4981]: I0227 19:16:27.988599 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:27 crc kubenswrapper[4981]: I0227 19:16:27.988614 4981 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:27 crc kubenswrapper[4981]: I0227 19:16:27.988629 4981 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-db-sync-config-data\") on node \"crc\" 
DevicePath \"\"" Feb 27 19:16:27 crc kubenswrapper[4981]: I0227 19:16:27.988641 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:27 crc kubenswrapper[4981]: I0227 19:16:27.988653 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/094a0674-7bf9-4e18-9e70-8efed0ae3ac2-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:28 crc kubenswrapper[4981]: I0227 19:16:28.469362 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7h875" Feb 27 19:16:28 crc kubenswrapper[4981]: I0227 19:16:28.469441 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7h875" event={"ID":"094a0674-7bf9-4e18-9e70-8efed0ae3ac2","Type":"ContainerDied","Data":"a91ce6bdd9f4d511c6269f4ba7c2ab85e4389ef76745b878522998318c1cd845"} Feb 27 19:16:28 crc kubenswrapper[4981]: I0227 19:16:28.469468 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a91ce6bdd9f4d511c6269f4ba7c2ab85e4389ef76745b878522998318c1cd845" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.094070 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.144170 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 19:16:29 crc kubenswrapper[4981]: E0227 19:16:29.144754 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="094a0674-7bf9-4e18-9e70-8efed0ae3ac2" containerName="cinder-db-sync" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.144782 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="094a0674-7bf9-4e18-9e70-8efed0ae3ac2" containerName="cinder-db-sync" Feb 27 19:16:29 crc 
kubenswrapper[4981]: I0227 19:16:29.145828 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="094a0674-7bf9-4e18-9e70-8efed0ae3ac2" containerName="cinder-db-sync" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.147145 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.150926 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.151199 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.151508 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.151949 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-kq5lk" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.206233 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.225433 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2209696a-9590-49d4-b70f-3c86d1cc62f2-config-data\") pod \"cinder-scheduler-0\" (UID: \"2209696a-9590-49d4-b70f-3c86d1cc62f2\") " pod="openstack/cinder-scheduler-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.225477 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2209696a-9590-49d4-b70f-3c86d1cc62f2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2209696a-9590-49d4-b70f-3c86d1cc62f2\") " pod="openstack/cinder-scheduler-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 
19:16:29.225525 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kgw5\" (UniqueName: \"kubernetes.io/projected/2209696a-9590-49d4-b70f-3c86d1cc62f2-kube-api-access-9kgw5\") pod \"cinder-scheduler-0\" (UID: \"2209696a-9590-49d4-b70f-3c86d1cc62f2\") " pod="openstack/cinder-scheduler-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.225547 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2209696a-9590-49d4-b70f-3c86d1cc62f2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2209696a-9590-49d4-b70f-3c86d1cc62f2\") " pod="openstack/cinder-scheduler-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.225574 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2209696a-9590-49d4-b70f-3c86d1cc62f2-scripts\") pod \"cinder-scheduler-0\" (UID: \"2209696a-9590-49d4-b70f-3c86d1cc62f2\") " pod="openstack/cinder-scheduler-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.225650 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2209696a-9590-49d4-b70f-3c86d1cc62f2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2209696a-9590-49d4-b70f-3c86d1cc62f2\") " pod="openstack/cinder-scheduler-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.240726 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-cwmgz"] Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.240987 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" podUID="ec10cd6d-3b96-4c6a-acea-af517d302163" containerName="dnsmasq-dns" 
containerID="cri-o://a85cb7ace08e7d452c06b0b251fb3dabcbb9ca5e92634e53fe105efb7f3278ed" gracePeriod=10 Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.300105 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bf4c8dd6c-shmsm"] Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.302579 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.312021 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bf4c8dd6c-shmsm"] Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.328362 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2209696a-9590-49d4-b70f-3c86d1cc62f2-scripts\") pod \"cinder-scheduler-0\" (UID: \"2209696a-9590-49d4-b70f-3c86d1cc62f2\") " pod="openstack/cinder-scheduler-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.328469 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2209696a-9590-49d4-b70f-3c86d1cc62f2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2209696a-9590-49d4-b70f-3c86d1cc62f2\") " pod="openstack/cinder-scheduler-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.328556 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2209696a-9590-49d4-b70f-3c86d1cc62f2-config-data\") pod \"cinder-scheduler-0\" (UID: \"2209696a-9590-49d4-b70f-3c86d1cc62f2\") " pod="openstack/cinder-scheduler-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.328573 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2209696a-9590-49d4-b70f-3c86d1cc62f2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"2209696a-9590-49d4-b70f-3c86d1cc62f2\") " pod="openstack/cinder-scheduler-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.328602 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kgw5\" (UniqueName: \"kubernetes.io/projected/2209696a-9590-49d4-b70f-3c86d1cc62f2-kube-api-access-9kgw5\") pod \"cinder-scheduler-0\" (UID: \"2209696a-9590-49d4-b70f-3c86d1cc62f2\") " pod="openstack/cinder-scheduler-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.328624 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2209696a-9590-49d4-b70f-3c86d1cc62f2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2209696a-9590-49d4-b70f-3c86d1cc62f2\") " pod="openstack/cinder-scheduler-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.342135 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2209696a-9590-49d4-b70f-3c86d1cc62f2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2209696a-9590-49d4-b70f-3c86d1cc62f2\") " pod="openstack/cinder-scheduler-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.343671 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2209696a-9590-49d4-b70f-3c86d1cc62f2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2209696a-9590-49d4-b70f-3c86d1cc62f2\") " pod="openstack/cinder-scheduler-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.344122 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2209696a-9590-49d4-b70f-3c86d1cc62f2-config-data\") pod \"cinder-scheduler-0\" (UID: \"2209696a-9590-49d4-b70f-3c86d1cc62f2\") " pod="openstack/cinder-scheduler-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.351873 4981 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2209696a-9590-49d4-b70f-3c86d1cc62f2-scripts\") pod \"cinder-scheduler-0\" (UID: \"2209696a-9590-49d4-b70f-3c86d1cc62f2\") " pod="openstack/cinder-scheduler-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.359910 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kgw5\" (UniqueName: \"kubernetes.io/projected/2209696a-9590-49d4-b70f-3c86d1cc62f2-kube-api-access-9kgw5\") pod \"cinder-scheduler-0\" (UID: \"2209696a-9590-49d4-b70f-3c86d1cc62f2\") " pod="openstack/cinder-scheduler-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.371286 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2209696a-9590-49d4-b70f-3c86d1cc62f2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2209696a-9590-49d4-b70f-3c86d1cc62f2\") " pod="openstack/cinder-scheduler-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.430629 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ldxm\" (UniqueName: \"kubernetes.io/projected/2d4e8b93-9791-4889-8e5f-5a3938235441-kube-api-access-6ldxm\") pod \"dnsmasq-dns-7bf4c8dd6c-shmsm\" (UID: \"2d4e8b93-9791-4889-8e5f-5a3938235441\") " pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.430721 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-config\") pod \"dnsmasq-dns-7bf4c8dd6c-shmsm\" (UID: \"2d4e8b93-9791-4889-8e5f-5a3938235441\") " pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.430773 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-ovsdbserver-sb\") pod \"dnsmasq-dns-7bf4c8dd6c-shmsm\" (UID: \"2d4e8b93-9791-4889-8e5f-5a3938235441\") " pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.430852 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-ovsdbserver-nb\") pod \"dnsmasq-dns-7bf4c8dd6c-shmsm\" (UID: \"2d4e8b93-9791-4889-8e5f-5a3938235441\") " pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.430899 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-dns-swift-storage-0\") pod \"dnsmasq-dns-7bf4c8dd6c-shmsm\" (UID: \"2d4e8b93-9791-4889-8e5f-5a3938235441\") " pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.430929 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-dns-svc\") pod \"dnsmasq-dns-7bf4c8dd6c-shmsm\" (UID: \"2d4e8b93-9791-4889-8e5f-5a3938235441\") " pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.452596 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.454396 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.457323 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.479473 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.500148 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.534728 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") " pod="openstack/cinder-api-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.534798 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-dns-svc\") pod \"dnsmasq-dns-7bf4c8dd6c-shmsm\" (UID: \"2d4e8b93-9791-4889-8e5f-5a3938235441\") " pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.534835 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-config-data\") pod \"cinder-api-0\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") " pod="openstack/cinder-api-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.534871 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnwhh\" (UniqueName: \"kubernetes.io/projected/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-kube-api-access-fnwhh\") pod \"cinder-api-0\" (UID: 
\"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") " pod="openstack/cinder-api-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.534903 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ldxm\" (UniqueName: \"kubernetes.io/projected/2d4e8b93-9791-4889-8e5f-5a3938235441-kube-api-access-6ldxm\") pod \"dnsmasq-dns-7bf4c8dd6c-shmsm\" (UID: \"2d4e8b93-9791-4889-8e5f-5a3938235441\") " pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.534948 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-scripts\") pod \"cinder-api-0\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") " pod="openstack/cinder-api-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.534979 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-config-data-custom\") pod \"cinder-api-0\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") " pod="openstack/cinder-api-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.535008 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-config\") pod \"dnsmasq-dns-7bf4c8dd6c-shmsm\" (UID: \"2d4e8b93-9791-4889-8e5f-5a3938235441\") " pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.535040 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-logs\") pod \"cinder-api-0\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") " pod="openstack/cinder-api-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 
19:16:29.535117 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") " pod="openstack/cinder-api-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.535175 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-ovsdbserver-sb\") pod \"dnsmasq-dns-7bf4c8dd6c-shmsm\" (UID: \"2d4e8b93-9791-4889-8e5f-5a3938235441\") " pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.536535 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-config\") pod \"dnsmasq-dns-7bf4c8dd6c-shmsm\" (UID: \"2d4e8b93-9791-4889-8e5f-5a3938235441\") " pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.536634 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-ovsdbserver-nb\") pod \"dnsmasq-dns-7bf4c8dd6c-shmsm\" (UID: \"2d4e8b93-9791-4889-8e5f-5a3938235441\") " pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.536742 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-dns-swift-storage-0\") pod \"dnsmasq-dns-7bf4c8dd6c-shmsm\" (UID: \"2d4e8b93-9791-4889-8e5f-5a3938235441\") " pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.537471 4981 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-dns-swift-storage-0\") pod \"dnsmasq-dns-7bf4c8dd6c-shmsm\" (UID: \"2d4e8b93-9791-4889-8e5f-5a3938235441\") " pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.537848 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-dns-svc\") pod \"dnsmasq-dns-7bf4c8dd6c-shmsm\" (UID: \"2d4e8b93-9791-4889-8e5f-5a3938235441\") " pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.537979 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-ovsdbserver-nb\") pod \"dnsmasq-dns-7bf4c8dd6c-shmsm\" (UID: \"2d4e8b93-9791-4889-8e5f-5a3938235441\") " pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.539962 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-ovsdbserver-sb\") pod \"dnsmasq-dns-7bf4c8dd6c-shmsm\" (UID: \"2d4e8b93-9791-4889-8e5f-5a3938235441\") " pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.557763 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ldxm\" (UniqueName: \"kubernetes.io/projected/2d4e8b93-9791-4889-8e5f-5a3938235441-kube-api-access-6ldxm\") pod \"dnsmasq-dns-7bf4c8dd6c-shmsm\" (UID: \"2d4e8b93-9791-4889-8e5f-5a3938235441\") " pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.636034 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.638191 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-config-data-custom\") pod \"cinder-api-0\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") " pod="openstack/cinder-api-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.638259 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-logs\") pod \"cinder-api-0\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") " pod="openstack/cinder-api-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.638295 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") " pod="openstack/cinder-api-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.638449 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") " pod="openstack/cinder-api-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.638489 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-config-data\") pod \"cinder-api-0\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") " pod="openstack/cinder-api-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.638524 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fnwhh\" (UniqueName: \"kubernetes.io/projected/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-kube-api-access-fnwhh\") pod \"cinder-api-0\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") " pod="openstack/cinder-api-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.638592 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-scripts\") pod \"cinder-api-0\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") " pod="openstack/cinder-api-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.638703 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") " pod="openstack/cinder-api-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.643700 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-logs\") pod \"cinder-api-0\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") " pod="openstack/cinder-api-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.645748 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-scripts\") pod \"cinder-api-0\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") " pod="openstack/cinder-api-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.645961 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-config-data-custom\") pod \"cinder-api-0\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") " pod="openstack/cinder-api-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.646534 4981 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-config-data\") pod \"cinder-api-0\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") " pod="openstack/cinder-api-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.653275 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") " pod="openstack/cinder-api-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.671180 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnwhh\" (UniqueName: \"kubernetes.io/projected/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-kube-api-access-fnwhh\") pod \"cinder-api-0\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") " pod="openstack/cinder-api-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.786695 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.839940 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.840241 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e652cdc7-a577-4e73-99da-01c7fc2c45f9" containerName="ceilometer-central-agent" containerID="cri-o://5fab91688329874bd31d1532c5de0a3d1f511968acfffc988f1258be6405b7f5" gracePeriod=30 Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.840392 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e652cdc7-a577-4e73-99da-01c7fc2c45f9" containerName="sg-core" containerID="cri-o://854bb8d7ddb95bc9d55fff3d29b6f314411516fe37426da7367f570ec25b783f" gracePeriod=30 Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.840544 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e652cdc7-a577-4e73-99da-01c7fc2c45f9" containerName="ceilometer-notification-agent" containerID="cri-o://eab0c9776977ec234e7403e6cd9f94b0ef6ead48601bb5a63aaa5a6bdb248954" gracePeriod=30 Feb 27 19:16:29 crc kubenswrapper[4981]: I0227 19:16:29.840649 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e652cdc7-a577-4e73-99da-01c7fc2c45f9" containerName="proxy-httpd" containerID="cri-o://060cc35cfd54b06974b6edfe87c00a5ecd4952b42c0c01437d0e2bce0da0c253" gracePeriod=30 Feb 27 19:16:30 crc kubenswrapper[4981]: I0227 19:16:30.504567 4981 generic.go:334] "Generic (PLEG): container finished" podID="ec10cd6d-3b96-4c6a-acea-af517d302163" containerID="a85cb7ace08e7d452c06b0b251fb3dabcbb9ca5e92634e53fe105efb7f3278ed" exitCode=0 Feb 27 19:16:30 crc kubenswrapper[4981]: I0227 19:16:30.504649 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" event={"ID":"ec10cd6d-3b96-4c6a-acea-af517d302163","Type":"ContainerDied","Data":"a85cb7ace08e7d452c06b0b251fb3dabcbb9ca5e92634e53fe105efb7f3278ed"} Feb 27 19:16:30 crc kubenswrapper[4981]: I0227 19:16:30.510192 4981 generic.go:334] "Generic (PLEG): container finished" podID="e652cdc7-a577-4e73-99da-01c7fc2c45f9" containerID="060cc35cfd54b06974b6edfe87c00a5ecd4952b42c0c01437d0e2bce0da0c253" exitCode=0 Feb 27 19:16:30 crc kubenswrapper[4981]: I0227 19:16:30.510234 4981 generic.go:334] "Generic (PLEG): container finished" podID="e652cdc7-a577-4e73-99da-01c7fc2c45f9" containerID="854bb8d7ddb95bc9d55fff3d29b6f314411516fe37426da7367f570ec25b783f" exitCode=2 Feb 27 19:16:30 crc kubenswrapper[4981]: I0227 19:16:30.510247 4981 generic.go:334] "Generic (PLEG): container finished" podID="e652cdc7-a577-4e73-99da-01c7fc2c45f9" containerID="5fab91688329874bd31d1532c5de0a3d1f511968acfffc988f1258be6405b7f5" exitCode=0 Feb 27 19:16:30 crc kubenswrapper[4981]: I0227 19:16:30.510275 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e652cdc7-a577-4e73-99da-01c7fc2c45f9","Type":"ContainerDied","Data":"060cc35cfd54b06974b6edfe87c00a5ecd4952b42c0c01437d0e2bce0da0c253"} Feb 27 19:16:30 crc kubenswrapper[4981]: I0227 19:16:30.510327 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e652cdc7-a577-4e73-99da-01c7fc2c45f9","Type":"ContainerDied","Data":"854bb8d7ddb95bc9d55fff3d29b6f314411516fe37426da7367f570ec25b783f"} Feb 27 19:16:30 crc kubenswrapper[4981]: I0227 19:16:30.510337 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e652cdc7-a577-4e73-99da-01c7fc2c45f9","Type":"ContainerDied","Data":"5fab91688329874bd31d1532c5de0a3d1f511968acfffc988f1258be6405b7f5"} Feb 27 19:16:30 crc kubenswrapper[4981]: I0227 19:16:30.645804 4981 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" podUID="ec10cd6d-3b96-4c6a-acea-af517d302163" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: connect: connection refused" Feb 27 19:16:31 crc kubenswrapper[4981]: I0227 19:16:31.193107 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-db6dc8fcb-c5pxk" Feb 27 19:16:31 crc kubenswrapper[4981]: I0227 19:16:31.230819 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-db6dc8fcb-c5pxk" Feb 27 19:16:31 crc kubenswrapper[4981]: I0227 19:16:31.783421 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 27 19:16:35 crc kubenswrapper[4981]: I0227 19:16:35.646927 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" podUID="ec10cd6d-3b96-4c6a-acea-af517d302163" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: connect: connection refused" Feb 27 19:16:39 crc kubenswrapper[4981]: I0227 19:16:39.601037 4981 generic.go:334] "Generic (PLEG): container finished" podID="e652cdc7-a577-4e73-99da-01c7fc2c45f9" containerID="eab0c9776977ec234e7403e6cd9f94b0ef6ead48601bb5a63aaa5a6bdb248954" exitCode=0 Feb 27 19:16:39 crc kubenswrapper[4981]: I0227 19:16:39.601138 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e652cdc7-a577-4e73-99da-01c7fc2c45f9","Type":"ContainerDied","Data":"eab0c9776977ec234e7403e6cd9f94b0ef6ead48601bb5a63aaa5a6bdb248954"} Feb 27 19:16:39 crc kubenswrapper[4981]: I0227 19:16:39.628988 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" Feb 27 19:16:39 crc kubenswrapper[4981]: E0227 19:16:39.629740 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:16:40 crc kubenswrapper[4981]: I0227 19:16:40.646345 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" podUID="ec10cd6d-3b96-4c6a-acea-af517d302163" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: connect: connection refused" Feb 27 19:16:40 crc kubenswrapper[4981]: I0227 19:16:40.646963 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" Feb 27 19:16:41 crc kubenswrapper[4981]: E0227 19:16:41.595081 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Feb 27 19:16:41 crc kubenswrapper[4981]: E0227 19:16:41.595646 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf5h687h566hfh684h66hd9h74h55fh697hd9hc5h597h57dh698h5bbhb4h5c7hdbhfdh555h66bh6h5cdh54dh5fh685h59h77h56dh5h67dq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvhc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(6047b4ff-4778-43fd-8d8e-c84b76ff271e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:16:41 crc kubenswrapper[4981]: E0227 19:16:41.597240 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="6047b4ff-4778-43fd-8d8e-c84b76ff271e" Feb 27 19:16:42 crc kubenswrapper[4981]: E0227 19:16:42.724128 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="6047b4ff-4778-43fd-8d8e-c84b76ff271e" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.240769 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.294495 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.388407 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-dns-swift-storage-0\") pod \"ec10cd6d-3b96-4c6a-acea-af517d302163\" (UID: \"ec10cd6d-3b96-4c6a-acea-af517d302163\") " Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.388896 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfbt8\" (UniqueName: \"kubernetes.io/projected/ec10cd6d-3b96-4c6a-acea-af517d302163-kube-api-access-dfbt8\") pod \"ec10cd6d-3b96-4c6a-acea-af517d302163\" (UID: \"ec10cd6d-3b96-4c6a-acea-af517d302163\") " Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.388949 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-ovsdbserver-sb\") pod \"ec10cd6d-3b96-4c6a-acea-af517d302163\" (UID: \"ec10cd6d-3b96-4c6a-acea-af517d302163\") " Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.389032 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-ovsdbserver-nb\") pod \"ec10cd6d-3b96-4c6a-acea-af517d302163\" (UID: \"ec10cd6d-3b96-4c6a-acea-af517d302163\") " Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.389138 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-config\") pod \"ec10cd6d-3b96-4c6a-acea-af517d302163\" (UID: \"ec10cd6d-3b96-4c6a-acea-af517d302163\") " Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.389175 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-dns-svc\") pod \"ec10cd6d-3b96-4c6a-acea-af517d302163\" (UID: \"ec10cd6d-3b96-4c6a-acea-af517d302163\") " Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.445279 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec10cd6d-3b96-4c6a-acea-af517d302163-kube-api-access-dfbt8" (OuterVolumeSpecName: "kube-api-access-dfbt8") pod "ec10cd6d-3b96-4c6a-acea-af517d302163" (UID: "ec10cd6d-3b96-4c6a-acea-af517d302163"). InnerVolumeSpecName "kube-api-access-dfbt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.471216 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ec10cd6d-3b96-4c6a-acea-af517d302163" (UID: "ec10cd6d-3b96-4c6a-acea-af517d302163"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.515288 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e652cdc7-a577-4e73-99da-01c7fc2c45f9-run-httpd\") pod \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.515356 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e652cdc7-a577-4e73-99da-01c7fc2c45f9-sg-core-conf-yaml\") pod \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.515409 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e652cdc7-a577-4e73-99da-01c7fc2c45f9-config-data\") pod \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.515445 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89hpt\" (UniqueName: \"kubernetes.io/projected/e652cdc7-a577-4e73-99da-01c7fc2c45f9-kube-api-access-89hpt\") pod \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.515465 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e652cdc7-a577-4e73-99da-01c7fc2c45f9-combined-ca-bundle\") pod \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.515486 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e652cdc7-a577-4e73-99da-01c7fc2c45f9-scripts\") pod \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.515541 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e652cdc7-a577-4e73-99da-01c7fc2c45f9-log-httpd\") pod \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\" (UID: \"e652cdc7-a577-4e73-99da-01c7fc2c45f9\") " Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.515847 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfbt8\" (UniqueName: \"kubernetes.io/projected/ec10cd6d-3b96-4c6a-acea-af517d302163-kube-api-access-dfbt8\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.515859 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.529183 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e652cdc7-a577-4e73-99da-01c7fc2c45f9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e652cdc7-a577-4e73-99da-01c7fc2c45f9" (UID: "e652cdc7-a577-4e73-99da-01c7fc2c45f9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.530177 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ec10cd6d-3b96-4c6a-acea-af517d302163" (UID: "ec10cd6d-3b96-4c6a-acea-af517d302163"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.533966 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e652cdc7-a577-4e73-99da-01c7fc2c45f9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e652cdc7-a577-4e73-99da-01c7fc2c45f9" (UID: "e652cdc7-a577-4e73-99da-01c7fc2c45f9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.554921 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e652cdc7-a577-4e73-99da-01c7fc2c45f9-kube-api-access-89hpt" (OuterVolumeSpecName: "kube-api-access-89hpt") pod "e652cdc7-a577-4e73-99da-01c7fc2c45f9" (UID: "e652cdc7-a577-4e73-99da-01c7fc2c45f9"). InnerVolumeSpecName "kube-api-access-89hpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.556332 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e652cdc7-a577-4e73-99da-01c7fc2c45f9-scripts" (OuterVolumeSpecName: "scripts") pod "e652cdc7-a577-4e73-99da-01c7fc2c45f9" (UID: "e652cdc7-a577-4e73-99da-01c7fc2c45f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.569787 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ec10cd6d-3b96-4c6a-acea-af517d302163" (UID: "ec10cd6d-3b96-4c6a-acea-af517d302163"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.574257 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-config" (OuterVolumeSpecName: "config") pod "ec10cd6d-3b96-4c6a-acea-af517d302163" (UID: "ec10cd6d-3b96-4c6a-acea-af517d302163"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.590775 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e652cdc7-a577-4e73-99da-01c7fc2c45f9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e652cdc7-a577-4e73-99da-01c7fc2c45f9" (UID: "e652cdc7-a577-4e73-99da-01c7fc2c45f9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.605434 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ec10cd6d-3b96-4c6a-acea-af517d302163" (UID: "ec10cd6d-3b96-4c6a-acea-af517d302163"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.618130 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.618166 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.618179 4981 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e652cdc7-a577-4e73-99da-01c7fc2c45f9-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.618190 4981 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e652cdc7-a577-4e73-99da-01c7fc2c45f9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.618209 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89hpt\" (UniqueName: \"kubernetes.io/projected/e652cdc7-a577-4e73-99da-01c7fc2c45f9-kube-api-access-89hpt\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.618222 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e652cdc7-a577-4e73-99da-01c7fc2c45f9-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.618235 4981 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.618246 4981 reconciler_common.go:293] 
"Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e652cdc7-a577-4e73-99da-01c7fc2c45f9-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.618254 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec10cd6d-3b96-4c6a-acea-af517d302163-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.696146 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" event={"ID":"ec10cd6d-3b96-4c6a-acea-af517d302163","Type":"ContainerDied","Data":"0b2aae48d7e8353227aae15db0b4e56bb9c561fbbab40040ac977f138d8db7c4"} Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.696213 4981 scope.go:117] "RemoveContainer" containerID="a85cb7ace08e7d452c06b0b251fb3dabcbb9ca5e92634e53fe105efb7f3278ed" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.696374 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-cwmgz" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.701755 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e652cdc7-a577-4e73-99da-01c7fc2c45f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e652cdc7-a577-4e73-99da-01c7fc2c45f9" (UID: "e652cdc7-a577-4e73-99da-01c7fc2c45f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.712797 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e652cdc7-a577-4e73-99da-01c7fc2c45f9-config-data" (OuterVolumeSpecName: "config-data") pod "e652cdc7-a577-4e73-99da-01c7fc2c45f9" (UID: "e652cdc7-a577-4e73-99da-01c7fc2c45f9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.712928 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e652cdc7-a577-4e73-99da-01c7fc2c45f9","Type":"ContainerDied","Data":"254e459cf95d19458b35e9b7f771d7b0becf197d5ba97e042fc05b8df94b2b28"} Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.713107 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.717377 4981 scope.go:117] "RemoveContainer" containerID="6e11ba2819f7ea7c3b207980fc4edee4f4e723d4d84c8a8caae9d810a3998606" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.719536 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e652cdc7-a577-4e73-99da-01c7fc2c45f9-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.719557 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e652cdc7-a577-4e73-99da-01c7fc2c45f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.756946 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-cwmgz"] Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.766312 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-cwmgz"] Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.770563 4981 scope.go:117] "RemoveContainer" containerID="060cc35cfd54b06974b6edfe87c00a5ecd4952b42c0c01437d0e2bce0da0c253" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.778133 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.800125 4981 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/ceilometer-0"] Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.818371 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:16:44 crc kubenswrapper[4981]: E0227 19:16:44.818842 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e652cdc7-a577-4e73-99da-01c7fc2c45f9" containerName="ceilometer-central-agent" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.818862 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e652cdc7-a577-4e73-99da-01c7fc2c45f9" containerName="ceilometer-central-agent" Feb 27 19:16:44 crc kubenswrapper[4981]: E0227 19:16:44.818880 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e652cdc7-a577-4e73-99da-01c7fc2c45f9" containerName="sg-core" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.818938 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e652cdc7-a577-4e73-99da-01c7fc2c45f9" containerName="sg-core" Feb 27 19:16:44 crc kubenswrapper[4981]: E0227 19:16:44.818954 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec10cd6d-3b96-4c6a-acea-af517d302163" containerName="dnsmasq-dns" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.818961 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec10cd6d-3b96-4c6a-acea-af517d302163" containerName="dnsmasq-dns" Feb 27 19:16:44 crc kubenswrapper[4981]: E0227 19:16:44.818972 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e652cdc7-a577-4e73-99da-01c7fc2c45f9" containerName="ceilometer-notification-agent" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.818978 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e652cdc7-a577-4e73-99da-01c7fc2c45f9" containerName="ceilometer-notification-agent" Feb 27 19:16:44 crc kubenswrapper[4981]: E0227 19:16:44.818990 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e652cdc7-a577-4e73-99da-01c7fc2c45f9" 
containerName="proxy-httpd" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.818996 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e652cdc7-a577-4e73-99da-01c7fc2c45f9" containerName="proxy-httpd" Feb 27 19:16:44 crc kubenswrapper[4981]: E0227 19:16:44.819012 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec10cd6d-3b96-4c6a-acea-af517d302163" containerName="init" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.819018 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec10cd6d-3b96-4c6a-acea-af517d302163" containerName="init" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.819212 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec10cd6d-3b96-4c6a-acea-af517d302163" containerName="dnsmasq-dns" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.819245 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="e652cdc7-a577-4e73-99da-01c7fc2c45f9" containerName="ceilometer-central-agent" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.819261 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="e652cdc7-a577-4e73-99da-01c7fc2c45f9" containerName="ceilometer-notification-agent" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.819272 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="e652cdc7-a577-4e73-99da-01c7fc2c45f9" containerName="sg-core" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.819283 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="e652cdc7-a577-4e73-99da-01c7fc2c45f9" containerName="proxy-httpd" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.820892 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.832458 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.841687 4981 scope.go:117] "RemoveContainer" containerID="854bb8d7ddb95bc9d55fff3d29b6f314411516fe37426da7367f570ec25b783f" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.842016 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.842260 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.886298 4981 scope.go:117] "RemoveContainer" containerID="eab0c9776977ec234e7403e6cd9f94b0ef6ead48601bb5a63aaa5a6bdb248954" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.905116 4981 scope.go:117] "RemoveContainer" containerID="5fab91688329874bd31d1532c5de0a3d1f511968acfffc988f1258be6405b7f5" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.923195 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf0ef64-e29f-4945-a2df-e1adb0477806-run-httpd\") pod \"ceilometer-0\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " pod="openstack/ceilometer-0" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.923314 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbf0ef64-e29f-4945-a2df-e1adb0477806-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " pod="openstack/ceilometer-0" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.923352 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-p7j5g\" (UniqueName: \"kubernetes.io/projected/cbf0ef64-e29f-4945-a2df-e1adb0477806-kube-api-access-p7j5g\") pod \"ceilometer-0\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " pod="openstack/ceilometer-0" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.923385 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf0ef64-e29f-4945-a2df-e1adb0477806-config-data\") pod \"ceilometer-0\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " pod="openstack/ceilometer-0" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.923447 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbf0ef64-e29f-4945-a2df-e1adb0477806-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " pod="openstack/ceilometer-0" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.923481 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbf0ef64-e29f-4945-a2df-e1adb0477806-scripts\") pod \"ceilometer-0\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " pod="openstack/ceilometer-0" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.923514 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf0ef64-e29f-4945-a2df-e1adb0477806-log-httpd\") pod \"ceilometer-0\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " pod="openstack/ceilometer-0" Feb 27 19:16:44 crc kubenswrapper[4981]: I0227 19:16:44.973521 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.033095 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/cbf0ef64-e29f-4945-a2df-e1adb0477806-scripts\") pod \"ceilometer-0\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " pod="openstack/ceilometer-0" Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.033165 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf0ef64-e29f-4945-a2df-e1adb0477806-log-httpd\") pod \"ceilometer-0\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " pod="openstack/ceilometer-0" Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.033252 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf0ef64-e29f-4945-a2df-e1adb0477806-run-httpd\") pod \"ceilometer-0\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " pod="openstack/ceilometer-0" Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.033319 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbf0ef64-e29f-4945-a2df-e1adb0477806-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " pod="openstack/ceilometer-0" Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.033458 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7j5g\" (UniqueName: \"kubernetes.io/projected/cbf0ef64-e29f-4945-a2df-e1adb0477806-kube-api-access-p7j5g\") pod \"ceilometer-0\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " pod="openstack/ceilometer-0" Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.033631 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf0ef64-e29f-4945-a2df-e1adb0477806-config-data\") pod \"ceilometer-0\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " pod="openstack/ceilometer-0" Feb 27 19:16:45 crc 
kubenswrapper[4981]: I0227 19:16:45.033686 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbf0ef64-e29f-4945-a2df-e1adb0477806-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " pod="openstack/ceilometer-0" Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.034717 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf0ef64-e29f-4945-a2df-e1adb0477806-run-httpd\") pod \"ceilometer-0\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " pod="openstack/ceilometer-0" Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.037354 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf0ef64-e29f-4945-a2df-e1adb0477806-log-httpd\") pod \"ceilometer-0\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " pod="openstack/ceilometer-0" Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.039003 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbf0ef64-e29f-4945-a2df-e1adb0477806-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " pod="openstack/ceilometer-0" Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.043041 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbf0ef64-e29f-4945-a2df-e1adb0477806-scripts\") pod \"ceilometer-0\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " pod="openstack/ceilometer-0" Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.043355 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbf0ef64-e29f-4945-a2df-e1adb0477806-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " pod="openstack/ceilometer-0" Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.047001 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf0ef64-e29f-4945-a2df-e1adb0477806-config-data\") pod \"ceilometer-0\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " pod="openstack/ceilometer-0" Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.053978 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7j5g\" (UniqueName: \"kubernetes.io/projected/cbf0ef64-e29f-4945-a2df-e1adb0477806-kube-api-access-p7j5g\") pod \"ceilometer-0\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " pod="openstack/ceilometer-0" Feb 27 19:16:45 crc kubenswrapper[4981]: E0227 19:16:45.058104 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified" Feb 27 19:16:45 crc kubenswrapper[4981]: E0227 19:16:45.058263 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-keystone-listener-log,Image:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,Command:[/usr/bin/dumb-init],Args:[--single-child -- /usr/bin/tail -n+1 -F 
/var/log/barbican/barbican-keystone-listener.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nbbh5b6hd5h5d6h599h666h55fh684h77h596h68ch676hb8h55h594h98h5bfhb8h697h5dfh58bhb8h56dhd7hfh85h84h597h5d4h55fh5c9hd5q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/barbican,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b45ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-keystone-listener-69d4bd5f7d-zs8qm_openstack(392b1bc3-d461-4cc5-8d63-64922c6c3d04): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:16:45 crc kubenswrapper[4981]: E0227 19:16:45.061155 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"barbican-keystone-listener-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"barbican-keystone-listener\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified\\\"\"]" pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" podUID="392b1bc3-d461-4cc5-8d63-64922c6c3d04" Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.090664 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 19:16:45 crc kubenswrapper[4981]: W0227 19:16:45.096573 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2209696a_9590_49d4_b70f_3c86d1cc62f2.slice/crio-f1ab996547dd15252ea3ce7f6d86a9fb2d39c1a4318e5a2f88f27eb49b4f4e37 WatchSource:0}: Error finding container f1ab996547dd15252ea3ce7f6d86a9fb2d39c1a4318e5a2f88f27eb49b4f4e37: Status 404 returned error can't find the container with id f1ab996547dd15252ea3ce7f6d86a9fb2d39c1a4318e5a2f88f27eb49b4f4e37 Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.158615 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.196297 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bf4c8dd6c-shmsm"]
Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.214406 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76f488968b-rp6r2"]
Feb 27 19:16:45 crc kubenswrapper[4981]: W0227 19:16:45.214471 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda912cdfa_b0ce_4ed4_909d_9d1af2a5a879.slice/crio-712dd143263e22e384e1bebc6b75baaf7b100e000b17d43340799d46723dda22 WatchSource:0}: Error finding container 712dd143263e22e384e1bebc6b75baaf7b100e000b17d43340799d46723dda22: Status 404 returned error can't find the container with id 712dd143263e22e384e1bebc6b75baaf7b100e000b17d43340799d46723dda22
Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.646453 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e652cdc7-a577-4e73-99da-01c7fc2c45f9" path="/var/lib/kubelet/pods/e652cdc7-a577-4e73-99da-01c7fc2c45f9/volumes"
Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.648823 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec10cd6d-3b96-4c6a-acea-af517d302163" path="/var/lib/kubelet/pods/ec10cd6d-3b96-4c6a-acea-af517d302163/volumes"
Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.712992 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.740287 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" event={"ID":"2d4e8b93-9791-4889-8e5f-5a3938235441","Type":"ContainerStarted","Data":"e177c1f54320132adf2f25e3537b4d766d0dc4e2c054c2bce5b8b4aea7789a37"}
Feb 27 19:16:45 crc kubenswrapper[4981]: W0227 19:16:45.756505 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbf0ef64_e29f_4945_a2df_e1adb0477806.slice/crio-1b9eef2ff64381867bbd0fad8ea69edd4f3a08be73f687bf990b1275e5f979ff WatchSource:0}: Error finding container 1b9eef2ff64381867bbd0fad8ea69edd4f3a08be73f687bf990b1275e5f979ff: Status 404 returned error can't find the container with id 1b9eef2ff64381867bbd0fad8ea69edd4f3a08be73f687bf990b1275e5f979ff
Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.766662 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-649cdc5f7c-t45d9" event={"ID":"0b5819ab-18f7-4885-a4b9-a6a3401903a1","Type":"ContainerStarted","Data":"e2aebd1ae7faac4c52f84543c2d526e5fb6068d8aa4b2db54d588a78b114286f"}
Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.776733 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76f488968b-rp6r2" event={"ID":"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879","Type":"ContainerStarted","Data":"2bfade5e02f0643fd1bf6ce8f381f0eeac25e9090ba2aefbc0b89a2a24773551"}
Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.776792 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76f488968b-rp6r2" event={"ID":"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879","Type":"ContainerStarted","Data":"712dd143263e22e384e1bebc6b75baaf7b100e000b17d43340799d46723dda22"}
Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.777940 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a52500fe-ee47-45d3-bd24-7a55e1b58b9b","Type":"ContainerStarted","Data":"04ef3843c4bc9b45cca6ba7bdeb2916aa9b56950c64a69c5b4c309606b14c06f"}
Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.781261 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2209696a-9590-49d4-b70f-3c86d1cc62f2","Type":"ContainerStarted","Data":"f1ab996547dd15252ea3ce7f6d86a9fb2d39c1a4318e5a2f88f27eb49b4f4e37"}
Feb 27 19:16:45 crc kubenswrapper[4981]: E0227 19:16:45.783781 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"barbican-keystone-listener-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified\\\"\", failed to \"StartContainer\" for \"barbican-keystone-listener\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified\\\"\"]" pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" podUID="392b1bc3-d461-4cc5-8d63-64922c6c3d04"
Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.881868 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kp7th"]
Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.910067 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kp7th"
Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.913968 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kp7th"]
Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.955580 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhnxf\" (UniqueName: \"kubernetes.io/projected/ecdc9fdc-fd28-4bdc-8403-934067c2ec3d-kube-api-access-dhnxf\") pod \"redhat-operators-kp7th\" (UID: \"ecdc9fdc-fd28-4bdc-8403-934067c2ec3d\") " pod="openshift-marketplace/redhat-operators-kp7th"
Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.955630 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecdc9fdc-fd28-4bdc-8403-934067c2ec3d-catalog-content\") pod \"redhat-operators-kp7th\" (UID: \"ecdc9fdc-fd28-4bdc-8403-934067c2ec3d\") " pod="openshift-marketplace/redhat-operators-kp7th"
Feb 27 19:16:45 crc kubenswrapper[4981]: I0227 19:16:45.955684 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecdc9fdc-fd28-4bdc-8403-934067c2ec3d-utilities\") pod \"redhat-operators-kp7th\" (UID: \"ecdc9fdc-fd28-4bdc-8403-934067c2ec3d\") " pod="openshift-marketplace/redhat-operators-kp7th"
Feb 27 19:16:46 crc kubenswrapper[4981]: I0227 19:16:46.056968 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhnxf\" (UniqueName: \"kubernetes.io/projected/ecdc9fdc-fd28-4bdc-8403-934067c2ec3d-kube-api-access-dhnxf\") pod \"redhat-operators-kp7th\" (UID: \"ecdc9fdc-fd28-4bdc-8403-934067c2ec3d\") " pod="openshift-marketplace/redhat-operators-kp7th"
Feb 27 19:16:46 crc kubenswrapper[4981]: I0227 19:16:46.057015 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecdc9fdc-fd28-4bdc-8403-934067c2ec3d-catalog-content\") pod \"redhat-operators-kp7th\" (UID: \"ecdc9fdc-fd28-4bdc-8403-934067c2ec3d\") " pod="openshift-marketplace/redhat-operators-kp7th"
Feb 27 19:16:46 crc kubenswrapper[4981]: I0227 19:16:46.057067 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecdc9fdc-fd28-4bdc-8403-934067c2ec3d-utilities\") pod \"redhat-operators-kp7th\" (UID: \"ecdc9fdc-fd28-4bdc-8403-934067c2ec3d\") " pod="openshift-marketplace/redhat-operators-kp7th"
Feb 27 19:16:46 crc kubenswrapper[4981]: I0227 19:16:46.057725 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecdc9fdc-fd28-4bdc-8403-934067c2ec3d-utilities\") pod \"redhat-operators-kp7th\" (UID: \"ecdc9fdc-fd28-4bdc-8403-934067c2ec3d\") " pod="openshift-marketplace/redhat-operators-kp7th"
Feb 27 19:16:46 crc kubenswrapper[4981]: I0227 19:16:46.057948 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecdc9fdc-fd28-4bdc-8403-934067c2ec3d-catalog-content\") pod \"redhat-operators-kp7th\" (UID: \"ecdc9fdc-fd28-4bdc-8403-934067c2ec3d\") " pod="openshift-marketplace/redhat-operators-kp7th"
Feb 27 19:16:46 crc kubenswrapper[4981]: I0227 19:16:46.076396 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhnxf\" (UniqueName: \"kubernetes.io/projected/ecdc9fdc-fd28-4bdc-8403-934067c2ec3d-kube-api-access-dhnxf\") pod \"redhat-operators-kp7th\" (UID: \"ecdc9fdc-fd28-4bdc-8403-934067c2ec3d\") " pod="openshift-marketplace/redhat-operators-kp7th"
Feb 27 19:16:46 crc kubenswrapper[4981]: I0227 19:16:46.239143 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kp7th"
Feb 27 19:16:46 crc kubenswrapper[4981]: I0227 19:16:46.786365 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kp7th"]
Feb 27 19:16:46 crc kubenswrapper[4981]: I0227 19:16:46.804122 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-649cdc5f7c-t45d9" event={"ID":"0b5819ab-18f7-4885-a4b9-a6a3401903a1","Type":"ContainerStarted","Data":"ca32bbbc043bf5c3d443cd95726f4e60c54e238c98114190773e4f7cf04378fe"}
Feb 27 19:16:46 crc kubenswrapper[4981]: I0227 19:16:46.806221 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a52500fe-ee47-45d3-bd24-7a55e1b58b9b","Type":"ContainerStarted","Data":"072e13c1a4ee3c1a730022438bc9a1b500fd950f4304ea9e4f80c016ab8a03a4"}
Feb 27 19:16:46 crc kubenswrapper[4981]: I0227 19:16:46.812189 4981 generic.go:334] "Generic (PLEG): container finished" podID="2d4e8b93-9791-4889-8e5f-5a3938235441" containerID="b75fa8c30fe9ac22cea8e277b3c27abc3ec446c1219488c3feaebdf7810f7611" exitCode=0
Feb 27 19:16:46 crc kubenswrapper[4981]: I0227 19:16:46.812275 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" event={"ID":"2d4e8b93-9791-4889-8e5f-5a3938235441","Type":"ContainerDied","Data":"b75fa8c30fe9ac22cea8e277b3c27abc3ec446c1219488c3feaebdf7810f7611"}
Feb 27 19:16:46 crc kubenswrapper[4981]: I0227 19:16:46.814794 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf0ef64-e29f-4945-a2df-e1adb0477806","Type":"ContainerStarted","Data":"1b9eef2ff64381867bbd0fad8ea69edd4f3a08be73f687bf990b1275e5f979ff"}
Feb 27 19:16:47 crc kubenswrapper[4981]: I0227 19:16:47.823696 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kp7th" event={"ID":"ecdc9fdc-fd28-4bdc-8403-934067c2ec3d","Type":"ContainerStarted","Data":"f64edeb57065e6b527af77dd076ea91e224b207dfe6bca0498cefbf6ce296f8e"}
Feb 27 19:16:47 crc kubenswrapper[4981]: I0227 19:16:47.825639 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kp7th" event={"ID":"ecdc9fdc-fd28-4bdc-8403-934067c2ec3d","Type":"ContainerStarted","Data":"3c7a5588a97324e36beb21d73c9b4604d54cf9d2423fb6a7565b2023d8fd06d9"}
Feb 27 19:16:47 crc kubenswrapper[4981]: I0227 19:16:47.827439 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76f488968b-rp6r2" event={"ID":"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879","Type":"ContainerStarted","Data":"7a49980a63c473dcc320eddd1d497f4ed0bc0d50087c74c6dd29e6e12d599a2d"}
Feb 27 19:16:47 crc kubenswrapper[4981]: I0227 19:16:47.827523 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76f488968b-rp6r2"
Feb 27 19:16:47 crc kubenswrapper[4981]: I0227 19:16:47.827591 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76f488968b-rp6r2"
Feb 27 19:16:47 crc kubenswrapper[4981]: I0227 19:16:47.867580 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-649cdc5f7c-t45d9" podStartSLOduration=5.329206509 podStartE2EDuration="29.867564115s" podCreationTimestamp="2026-02-27 19:16:18 +0000 UTC" firstStartedPulling="2026-02-27 19:16:19.55162032 +0000 UTC m=+1879.030401480" lastFinishedPulling="2026-02-27 19:16:44.089977926 +0000 UTC m=+1903.568759086" observedRunningTime="2026-02-27 19:16:47.864454972 +0000 UTC m=+1907.343236132" watchObservedRunningTime="2026-02-27 19:16:47.867564115 +0000 UTC m=+1907.346345275"
Feb 27 19:16:47 crc kubenswrapper[4981]: I0227 19:16:47.888895 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-76f488968b-rp6r2" podStartSLOduration=26.888855541 podStartE2EDuration="26.888855541s" podCreationTimestamp="2026-02-27 19:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:16:47.881615124 +0000 UTC m=+1907.360396274" watchObservedRunningTime="2026-02-27 19:16:47.888855541 +0000 UTC m=+1907.367636701"
Feb 27 19:16:48 crc kubenswrapper[4981]: I0227 19:16:48.836900 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" event={"ID":"2d4e8b93-9791-4889-8e5f-5a3938235441","Type":"ContainerStarted","Data":"10e9733bd1a84beb0e9d3bdac8211f223dcef3ed3d8833b09543cc31cc3e56eb"}
Feb 27 19:16:48 crc kubenswrapper[4981]: I0227 19:16:48.839227 4981 generic.go:334] "Generic (PLEG): container finished" podID="ecdc9fdc-fd28-4bdc-8403-934067c2ec3d" containerID="f64edeb57065e6b527af77dd076ea91e224b207dfe6bca0498cefbf6ce296f8e" exitCode=0
Feb 27 19:16:48 crc kubenswrapper[4981]: I0227 19:16:48.839325 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kp7th" event={"ID":"ecdc9fdc-fd28-4bdc-8403-934067c2ec3d","Type":"ContainerDied","Data":"f64edeb57065e6b527af77dd076ea91e224b207dfe6bca0498cefbf6ce296f8e"}
Feb 27 19:16:49 crc kubenswrapper[4981]: I0227 19:16:49.852549 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a52500fe-ee47-45d3-bd24-7a55e1b58b9b","Type":"ContainerStarted","Data":"7d9fa9b2f09fb4a8b54c8e4d5fba6ebcd876c44084aa26c6389c03a63c988f71"}
Feb 27 19:16:49 crc kubenswrapper[4981]: I0227 19:16:49.852823 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a52500fe-ee47-45d3-bd24-7a55e1b58b9b" containerName="cinder-api-log" containerID="cri-o://072e13c1a4ee3c1a730022438bc9a1b500fd950f4304ea9e4f80c016ab8a03a4" gracePeriod=30
Feb 27 19:16:49 crc kubenswrapper[4981]: I0227 19:16:49.852919 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a52500fe-ee47-45d3-bd24-7a55e1b58b9b" containerName="cinder-api" containerID="cri-o://7d9fa9b2f09fb4a8b54c8e4d5fba6ebcd876c44084aa26c6389c03a63c988f71" gracePeriod=30
Feb 27 19:16:49 crc kubenswrapper[4981]: I0227 19:16:49.892481 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" podStartSLOduration=20.892459815 podStartE2EDuration="20.892459815s" podCreationTimestamp="2026-02-27 19:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:16:49.887522808 +0000 UTC m=+1909.366303968" watchObservedRunningTime="2026-02-27 19:16:49.892459815 +0000 UTC m=+1909.371240975"
Feb 27 19:16:49 crc kubenswrapper[4981]: I0227 19:16:49.913524 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=20.913503203 podStartE2EDuration="20.913503203s" podCreationTimestamp="2026-02-27 19:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:16:49.912494853 +0000 UTC m=+1909.391276013" watchObservedRunningTime="2026-02-27 19:16:49.913503203 +0000 UTC m=+1909.392284363"
Feb 27 19:16:50 crc kubenswrapper[4981]: I0227 19:16:50.863311 4981 generic.go:334] "Generic (PLEG): container finished" podID="a52500fe-ee47-45d3-bd24-7a55e1b58b9b" containerID="7d9fa9b2f09fb4a8b54c8e4d5fba6ebcd876c44084aa26c6389c03a63c988f71" exitCode=0
Feb 27 19:16:50 crc kubenswrapper[4981]: I0227 19:16:50.863698 4981 generic.go:334] "Generic (PLEG): container finished" podID="a52500fe-ee47-45d3-bd24-7a55e1b58b9b" containerID="072e13c1a4ee3c1a730022438bc9a1b500fd950f4304ea9e4f80c016ab8a03a4" exitCode=143
Feb 27 19:16:50 crc kubenswrapper[4981]: I0227 19:16:50.863383 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a52500fe-ee47-45d3-bd24-7a55e1b58b9b","Type":"ContainerDied","Data":"7d9fa9b2f09fb4a8b54c8e4d5fba6ebcd876c44084aa26c6389c03a63c988f71"}
Feb 27 19:16:50 crc kubenswrapper[4981]: I0227 19:16:50.863731 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a52500fe-ee47-45d3-bd24-7a55e1b58b9b","Type":"ContainerDied","Data":"072e13c1a4ee3c1a730022438bc9a1b500fd950f4304ea9e4f80c016ab8a03a4"}
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.220884 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.361797 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-etc-machine-id\") pod \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") "
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.362339 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-config-data-custom\") pod \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") "
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.362428 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-logs\") pod \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") "
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.362476 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-config-data\") pod \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") "
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.362507 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnwhh\" (UniqueName: \"kubernetes.io/projected/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-kube-api-access-fnwhh\") pod \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") "
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.362541 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-scripts\") pod \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") "
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.362622 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-combined-ca-bundle\") pod \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\" (UID: \"a52500fe-ee47-45d3-bd24-7a55e1b58b9b\") "
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.361994 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a52500fe-ee47-45d3-bd24-7a55e1b58b9b" (UID: "a52500fe-ee47-45d3-bd24-7a55e1b58b9b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.363354 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-logs" (OuterVolumeSpecName: "logs") pod "a52500fe-ee47-45d3-bd24-7a55e1b58b9b" (UID: "a52500fe-ee47-45d3-bd24-7a55e1b58b9b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.391211 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-kube-api-access-fnwhh" (OuterVolumeSpecName: "kube-api-access-fnwhh") pod "a52500fe-ee47-45d3-bd24-7a55e1b58b9b" (UID: "a52500fe-ee47-45d3-bd24-7a55e1b58b9b"). InnerVolumeSpecName "kube-api-access-fnwhh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.400364 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-scripts" (OuterVolumeSpecName: "scripts") pod "a52500fe-ee47-45d3-bd24-7a55e1b58b9b" (UID: "a52500fe-ee47-45d3-bd24-7a55e1b58b9b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.408866 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a52500fe-ee47-45d3-bd24-7a55e1b58b9b" (UID: "a52500fe-ee47-45d3-bd24-7a55e1b58b9b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.412471 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a52500fe-ee47-45d3-bd24-7a55e1b58b9b" (UID: "a52500fe-ee47-45d3-bd24-7a55e1b58b9b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.433402 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-config-data" (OuterVolumeSpecName: "config-data") pod "a52500fe-ee47-45d3-bd24-7a55e1b58b9b" (UID: "a52500fe-ee47-45d3-bd24-7a55e1b58b9b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.465330 4981 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.465364 4981 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.465376 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-logs\") on node \"crc\" DevicePath \"\""
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.465391 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.465403 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnwhh\" (UniqueName: \"kubernetes.io/projected/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-kube-api-access-fnwhh\") on node \"crc\" DevicePath \"\""
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.465415 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.465423 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a52500fe-ee47-45d3-bd24-7a55e1b58b9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.537287 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.582775 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6f8d597b78-f58nv"
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.796743 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6f8d597b78-f58nv"
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.882360 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a52500fe-ee47-45d3-bd24-7a55e1b58b9b","Type":"ContainerDied","Data":"04ef3843c4bc9b45cca6ba7bdeb2916aa9b56950c64a69c5b4c309606b14c06f"}
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.882433 4981 scope.go:117] "RemoveContainer" containerID="7d9fa9b2f09fb4a8b54c8e4d5fba6ebcd876c44084aa26c6389c03a63c988f71"
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.882387 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.918619 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.931635 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.957733 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Feb 27 19:16:51 crc kubenswrapper[4981]: E0227 19:16:51.958433 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a52500fe-ee47-45d3-bd24-7a55e1b58b9b" containerName="cinder-api-log"
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.958461 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="a52500fe-ee47-45d3-bd24-7a55e1b58b9b" containerName="cinder-api-log"
Feb 27 19:16:51 crc kubenswrapper[4981]: E0227 19:16:51.958497 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a52500fe-ee47-45d3-bd24-7a55e1b58b9b" containerName="cinder-api"
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.958508 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="a52500fe-ee47-45d3-bd24-7a55e1b58b9b" containerName="cinder-api"
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.958745 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="a52500fe-ee47-45d3-bd24-7a55e1b58b9b" containerName="cinder-api"
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.958795 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="a52500fe-ee47-45d3-bd24-7a55e1b58b9b" containerName="cinder-api-log"
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.962183 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.974235 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.974396 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 27 19:16:51 crc kubenswrapper[4981]: I0227 19:16:51.974255 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.025678 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.082314 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-config-data\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.082392 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.082417 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.082541 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8wpz\" (UniqueName: \"kubernetes.io/projected/48fdca2c-4513-4ee6-ad1b-bf69891f5580-kube-api-access-r8wpz\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.082571 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48fdca2c-4513-4ee6-ad1b-bf69891f5580-logs\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.082594 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/48fdca2c-4513-4ee6-ad1b-bf69891f5580-etc-machine-id\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.082670 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-config-data-custom\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.082695 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-scripts\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.082723 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-public-tls-certs\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.184070 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8wpz\" (UniqueName: \"kubernetes.io/projected/48fdca2c-4513-4ee6-ad1b-bf69891f5580-kube-api-access-r8wpz\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.184135 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48fdca2c-4513-4ee6-ad1b-bf69891f5580-logs\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.184161 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/48fdca2c-4513-4ee6-ad1b-bf69891f5580-etc-machine-id\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.184216 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-config-data-custom\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.184244 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-scripts\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.184270 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-public-tls-certs\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.184333 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-config-data\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.184362 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.184385 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.186658 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48fdca2c-4513-4ee6-ad1b-bf69891f5580-logs\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.187129 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/48fdca2c-4513-4ee6-ad1b-bf69891f5580-etc-machine-id\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.194163 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-config-data\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.197720 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.197757 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-scripts\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.198567 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-public-tls-certs\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.199222 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.205938 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-config-data-custom\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.217793 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8wpz\" (UniqueName: \"kubernetes.io/projected/48fdca2c-4513-4ee6-ad1b-bf69891f5580-kube-api-access-r8wpz\") pod \"cinder-api-0\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.334617 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 27 19:16:52 crc kubenswrapper[4981]: I0227 19:16:52.628811 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba"
Feb 27 19:16:52 crc kubenswrapper[4981]: E0227 19:16:52.629060 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6"
Feb 27 19:16:53 crc kubenswrapper[4981]: I0227 19:16:53.644716 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52500fe-ee47-45d3-bd24-7a55e1b58b9b" path="/var/lib/kubelet/pods/a52500fe-ee47-45d3-bd24-7a55e1b58b9b/volumes"
Feb 27 19:16:53 crc kubenswrapper[4981]: I0227 19:16:53.818260 4981 scope.go:117] "RemoveContainer" containerID="072e13c1a4ee3c1a730022438bc9a1b500fd950f4304ea9e4f80c016ab8a03a4"
Feb 27 19:16:54 crc kubenswrapper[4981]: I0227 19:16:54.339093 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76f488968b-rp6r2"
Feb 27 19:16:54 crc kubenswrapper[4981]: I0227 19:16:54.389617 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76f488968b-rp6r2"
Feb 27 19:16:54 crc kubenswrapper[4981]: I0227 19:16:54.465598 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-db6dc8fcb-c5pxk"]
Feb 27 19:16:54 crc kubenswrapper[4981]: I0227 19:16:54.465924 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-db6dc8fcb-c5pxk" podUID="c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9" containerName="barbican-api-log" containerID="cri-o://8d637a38fd3283377e57fd4aee97428ac1a3acee6bddb828574c303a3880a1ae" gracePeriod=30
Feb 27 19:16:54 crc kubenswrapper[4981]: I0227 19:16:54.466100 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-db6dc8fcb-c5pxk" podUID="c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9" containerName="barbican-api" containerID="cri-o://2fbbc4dd27432812fea4b3b90085f81535a6ac71bf8986218925a46e0d890326" gracePeriod=30
Feb 27 19:16:54 crc kubenswrapper[4981]: I0227 19:16:54.637268 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm"
Feb 27 19:16:54 crc kubenswrapper[4981]: I0227 19:16:54.638421 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm"
Feb 27 19:16:54 crc kubenswrapper[4981]: I0227 19:16:54.720399 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58957f86ff-sn4jh"]
Feb 27 19:16:54 crc kubenswrapper[4981]: I0227 19:16:54.720691 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" podUID="6c81e863-b29e-405f-b9e5-9979b695bcd2" containerName="dnsmasq-dns" containerID="cri-o://aafa0bd1676bd36b8c5a721d7ee7f7e612b5640806af1569cadb7e4304fa1804" gracePeriod=10
Feb 27 19:16:54 crc kubenswrapper[4981]: I0227 19:16:54.931210 4981 generic.go:334] "Generic (PLEG): container finished"
podID="6c81e863-b29e-405f-b9e5-9979b695bcd2" containerID="aafa0bd1676bd36b8c5a721d7ee7f7e612b5640806af1569cadb7e4304fa1804" exitCode=0 Feb 27 19:16:54 crc kubenswrapper[4981]: I0227 19:16:54.931528 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" event={"ID":"6c81e863-b29e-405f-b9e5-9979b695bcd2","Type":"ContainerDied","Data":"aafa0bd1676bd36b8c5a721d7ee7f7e612b5640806af1569cadb7e4304fa1804"} Feb 27 19:16:54 crc kubenswrapper[4981]: I0227 19:16:54.940629 4981 generic.go:334] "Generic (PLEG): container finished" podID="c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9" containerID="8d637a38fd3283377e57fd4aee97428ac1a3acee6bddb828574c303a3880a1ae" exitCode=143 Feb 27 19:16:54 crc kubenswrapper[4981]: I0227 19:16:54.940711 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-db6dc8fcb-c5pxk" event={"ID":"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9","Type":"ContainerDied","Data":"8d637a38fd3283377e57fd4aee97428ac1a3acee6bddb828574c303a3880a1ae"} Feb 27 19:16:55 crc kubenswrapper[4981]: I0227 19:16:55.550529 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" Feb 27 19:16:55 crc kubenswrapper[4981]: I0227 19:16:55.660652 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bv85\" (UniqueName: \"kubernetes.io/projected/6c81e863-b29e-405f-b9e5-9979b695bcd2-kube-api-access-8bv85\") pod \"6c81e863-b29e-405f-b9e5-9979b695bcd2\" (UID: \"6c81e863-b29e-405f-b9e5-9979b695bcd2\") " Feb 27 19:16:55 crc kubenswrapper[4981]: I0227 19:16:55.661137 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-dns-svc\") pod \"6c81e863-b29e-405f-b9e5-9979b695bcd2\" (UID: \"6c81e863-b29e-405f-b9e5-9979b695bcd2\") " Feb 27 19:16:55 crc kubenswrapper[4981]: I0227 19:16:55.661183 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-ovsdbserver-sb\") pod \"6c81e863-b29e-405f-b9e5-9979b695bcd2\" (UID: \"6c81e863-b29e-405f-b9e5-9979b695bcd2\") " Feb 27 19:16:55 crc kubenswrapper[4981]: I0227 19:16:55.661255 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-ovsdbserver-nb\") pod \"6c81e863-b29e-405f-b9e5-9979b695bcd2\" (UID: \"6c81e863-b29e-405f-b9e5-9979b695bcd2\") " Feb 27 19:16:55 crc kubenswrapper[4981]: I0227 19:16:55.661368 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-dns-swift-storage-0\") pod \"6c81e863-b29e-405f-b9e5-9979b695bcd2\" (UID: \"6c81e863-b29e-405f-b9e5-9979b695bcd2\") " Feb 27 19:16:55 crc kubenswrapper[4981]: I0227 19:16:55.661421 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-config\") pod \"6c81e863-b29e-405f-b9e5-9979b695bcd2\" (UID: \"6c81e863-b29e-405f-b9e5-9979b695bcd2\") " Feb 27 19:16:55 crc kubenswrapper[4981]: I0227 19:16:55.696280 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c81e863-b29e-405f-b9e5-9979b695bcd2-kube-api-access-8bv85" (OuterVolumeSpecName: "kube-api-access-8bv85") pod "6c81e863-b29e-405f-b9e5-9979b695bcd2" (UID: "6c81e863-b29e-405f-b9e5-9979b695bcd2"). InnerVolumeSpecName "kube-api-access-8bv85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:16:55 crc kubenswrapper[4981]: I0227 19:16:55.766609 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bv85\" (UniqueName: \"kubernetes.io/projected/6c81e863-b29e-405f-b9e5-9979b695bcd2-kube-api-access-8bv85\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:55 crc kubenswrapper[4981]: I0227 19:16:55.894557 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-config" (OuterVolumeSpecName: "config") pod "6c81e863-b29e-405f-b9e5-9979b695bcd2" (UID: "6c81e863-b29e-405f-b9e5-9979b695bcd2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:16:55 crc kubenswrapper[4981]: I0227 19:16:55.959143 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 27 19:16:55 crc kubenswrapper[4981]: I0227 19:16:55.963183 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6c81e863-b29e-405f-b9e5-9979b695bcd2" (UID: "6c81e863-b29e-405f-b9e5-9979b695bcd2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:16:55 crc kubenswrapper[4981]: I0227 19:16:55.996239 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:55 crc kubenswrapper[4981]: I0227 19:16:55.996307 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:56 crc kubenswrapper[4981]: I0227 19:16:56.050417 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6c81e863-b29e-405f-b9e5-9979b695bcd2" (UID: "6c81e863-b29e-405f-b9e5-9979b695bcd2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:16:56 crc kubenswrapper[4981]: I0227 19:16:56.064821 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6c81e863-b29e-405f-b9e5-9979b695bcd2" (UID: "6c81e863-b29e-405f-b9e5-9979b695bcd2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:16:56 crc kubenswrapper[4981]: I0227 19:16:56.066865 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6c81e863-b29e-405f-b9e5-9979b695bcd2" (UID: "6c81e863-b29e-405f-b9e5-9979b695bcd2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:16:56 crc kubenswrapper[4981]: I0227 19:16:56.072029 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" event={"ID":"6c81e863-b29e-405f-b9e5-9979b695bcd2","Type":"ContainerDied","Data":"884822a58fb862b78d69da125faa4f520977583af373a2cf0f14cf8e3a887cbd"} Feb 27 19:16:56 crc kubenswrapper[4981]: I0227 19:16:56.072113 4981 scope.go:117] "RemoveContainer" containerID="aafa0bd1676bd36b8c5a721d7ee7f7e612b5640806af1569cadb7e4304fa1804" Feb 27 19:16:56 crc kubenswrapper[4981]: I0227 19:16:56.072318 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58957f86ff-sn4jh" Feb 27 19:16:56 crc kubenswrapper[4981]: I0227 19:16:56.102973 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:56 crc kubenswrapper[4981]: I0227 19:16:56.103372 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:56 crc kubenswrapper[4981]: I0227 19:16:56.103385 4981 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c81e863-b29e-405f-b9e5-9979b695bcd2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:56 crc kubenswrapper[4981]: I0227 19:16:56.115151 4981 scope.go:117] "RemoveContainer" containerID="64c38a5cec993faf734fe7777709d31dde9058b94675cc0df3f02ae82b4071d8" Feb 27 19:16:56 crc kubenswrapper[4981]: I0227 19:16:56.153734 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58957f86ff-sn4jh"] Feb 27 19:16:56 crc kubenswrapper[4981]: I0227 19:16:56.163663 4981 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/dnsmasq-dns-58957f86ff-sn4jh"] Feb 27 19:16:57 crc kubenswrapper[4981]: I0227 19:16:57.097023 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2209696a-9590-49d4-b70f-3c86d1cc62f2","Type":"ContainerStarted","Data":"133e7fa1c7d8e7886810e3e58c2277588cf4fea88ff4893dc670ee892e7c6ab7"} Feb 27 19:16:57 crc kubenswrapper[4981]: I0227 19:16:57.126290 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf0ef64-e29f-4945-a2df-e1adb0477806","Type":"ContainerStarted","Data":"ce6d2237386c71fe58768d5fc41ce360d9699bde69ba7ed831cda2f824103e2f"} Feb 27 19:16:57 crc kubenswrapper[4981]: I0227 19:16:57.129943 4981 generic.go:334] "Generic (PLEG): container finished" podID="ecdc9fdc-fd28-4bdc-8403-934067c2ec3d" containerID="b1f584f37e5aeb76a075432824101373c92311609d3c6df5917789ce20af5508" exitCode=0 Feb 27 19:16:57 crc kubenswrapper[4981]: I0227 19:16:57.130004 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kp7th" event={"ID":"ecdc9fdc-fd28-4bdc-8403-934067c2ec3d","Type":"ContainerDied","Data":"b1f584f37e5aeb76a075432824101373c92311609d3c6df5917789ce20af5508"} Feb 27 19:16:57 crc kubenswrapper[4981]: I0227 19:16:57.134182 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"48fdca2c-4513-4ee6-ad1b-bf69891f5580","Type":"ContainerStarted","Data":"1cbf1ce682a3eeca1567599c6ff529e2db2d20a4965a962ce5c563a3e4dd58f1"} Feb 27 19:16:57 crc kubenswrapper[4981]: I0227 19:16:57.134227 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"48fdca2c-4513-4ee6-ad1b-bf69891f5580","Type":"ContainerStarted","Data":"3456dc5752607b5ff4b87c9e8223dcd60d2e5e49dfb93728027024c08bf6c2af"} Feb 27 19:16:57 crc kubenswrapper[4981]: I0227 19:16:57.648499 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6c81e863-b29e-405f-b9e5-9979b695bcd2" path="/var/lib/kubelet/pods/6c81e863-b29e-405f-b9e5-9979b695bcd2/volumes" Feb 27 19:16:58 crc kubenswrapper[4981]: I0227 19:16:58.146511 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2209696a-9590-49d4-b70f-3c86d1cc62f2","Type":"ContainerStarted","Data":"ce4d7227144ca8584a4d9256a7aa932093994a6e13b790a0f4a256dbf86d8129"} Feb 27 19:16:58 crc kubenswrapper[4981]: I0227 19:16:58.154199 4981 generic.go:334] "Generic (PLEG): container finished" podID="c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9" containerID="2fbbc4dd27432812fea4b3b90085f81535a6ac71bf8986218925a46e0d890326" exitCode=0 Feb 27 19:16:58 crc kubenswrapper[4981]: I0227 19:16:58.154347 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-db6dc8fcb-c5pxk" event={"ID":"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9","Type":"ContainerDied","Data":"2fbbc4dd27432812fea4b3b90085f81535a6ac71bf8986218925a46e0d890326"} Feb 27 19:16:58 crc kubenswrapper[4981]: I0227 19:16:58.180396 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=20.188856633 podStartE2EDuration="29.180369822s" podCreationTimestamp="2026-02-27 19:16:29 +0000 UTC" firstStartedPulling="2026-02-27 19:16:45.099341679 +0000 UTC m=+1904.578122839" lastFinishedPulling="2026-02-27 19:16:54.090854878 +0000 UTC m=+1913.569636028" observedRunningTime="2026-02-27 19:16:58.1745498 +0000 UTC m=+1917.653330960" watchObservedRunningTime="2026-02-27 19:16:58.180369822 +0000 UTC m=+1917.659150982" Feb 27 19:16:58 crc kubenswrapper[4981]: I0227 19:16:58.754246 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-db6dc8fcb-c5pxk" Feb 27 19:16:58 crc kubenswrapper[4981]: I0227 19:16:58.866286 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-combined-ca-bundle\") pod \"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9\" (UID: \"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9\") " Feb 27 19:16:58 crc kubenswrapper[4981]: I0227 19:16:58.866512 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr4lk\" (UniqueName: \"kubernetes.io/projected/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-kube-api-access-sr4lk\") pod \"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9\" (UID: \"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9\") " Feb 27 19:16:58 crc kubenswrapper[4981]: I0227 19:16:58.866569 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-config-data\") pod \"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9\" (UID: \"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9\") " Feb 27 19:16:58 crc kubenswrapper[4981]: I0227 19:16:58.866623 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-logs\") pod \"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9\" (UID: \"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9\") " Feb 27 19:16:58 crc kubenswrapper[4981]: I0227 19:16:58.866709 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-config-data-custom\") pod \"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9\" (UID: \"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9\") " Feb 27 19:16:58 crc kubenswrapper[4981]: I0227 19:16:58.867988 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-logs" (OuterVolumeSpecName: "logs") pod "c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9" (UID: "c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:16:58 crc kubenswrapper[4981]: I0227 19:16:58.878613 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9" (UID: "c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:16:58 crc kubenswrapper[4981]: I0227 19:16:58.888550 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-kube-api-access-sr4lk" (OuterVolumeSpecName: "kube-api-access-sr4lk") pod "c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9" (UID: "c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9"). InnerVolumeSpecName "kube-api-access-sr4lk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:16:58 crc kubenswrapper[4981]: I0227 19:16:58.930362 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9" (UID: "c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:16:58 crc kubenswrapper[4981]: I0227 19:16:58.938829 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-config-data" (OuterVolumeSpecName: "config-data") pod "c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9" (UID: "c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:16:58 crc kubenswrapper[4981]: I0227 19:16:58.968484 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr4lk\" (UniqueName: \"kubernetes.io/projected/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-kube-api-access-sr4lk\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:58 crc kubenswrapper[4981]: I0227 19:16:58.968523 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:58 crc kubenswrapper[4981]: I0227 19:16:58.968535 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-logs\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:58 crc kubenswrapper[4981]: I0227 19:16:58.968547 4981 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:58 crc kubenswrapper[4981]: I0227 19:16:58.968557 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:59 crc kubenswrapper[4981]: I0227 19:16:59.164743 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-db6dc8fcb-c5pxk" event={"ID":"c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9","Type":"ContainerDied","Data":"be55e92b6339fa1c9f5fb6e572fadc9ab75cb3f770f39a5444417bd4f28907c4"} Feb 27 19:16:59 crc kubenswrapper[4981]: I0227 19:16:59.164761 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-db6dc8fcb-c5pxk" Feb 27 19:16:59 crc kubenswrapper[4981]: I0227 19:16:59.164827 4981 scope.go:117] "RemoveContainer" containerID="2fbbc4dd27432812fea4b3b90085f81535a6ac71bf8986218925a46e0d890326" Feb 27 19:16:59 crc kubenswrapper[4981]: I0227 19:16:59.169634 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf0ef64-e29f-4945-a2df-e1adb0477806","Type":"ContainerStarted","Data":"6f2ec3ef33889a3eebc60b5f33250f1ece331b74527f03423a47750db361ffc1"} Feb 27 19:16:59 crc kubenswrapper[4981]: I0227 19:16:59.171661 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6047b4ff-4778-43fd-8d8e-c84b76ff271e","Type":"ContainerStarted","Data":"ad16f6eebbf2beec846b93606e3bf6e09e066977ec08c59d3b1d01db8b58e6a8"} Feb 27 19:16:59 crc kubenswrapper[4981]: I0227 19:16:59.175865 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"48fdca2c-4513-4ee6-ad1b-bf69891f5580","Type":"ContainerStarted","Data":"3a24b6b7046d00eaee078203e6b423a21700f864b03fcfa22beb510090d24c3b"} Feb 27 19:16:59 crc kubenswrapper[4981]: I0227 19:16:59.176099 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 27 19:16:59 crc kubenswrapper[4981]: I0227 19:16:59.187529 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.153219474 podStartE2EDuration="45.18750185s" podCreationTimestamp="2026-02-27 19:16:14 +0000 UTC" firstStartedPulling="2026-02-27 19:16:15.231296689 +0000 UTC m=+1874.710077849" lastFinishedPulling="2026-02-27 19:16:58.265579055 +0000 UTC m=+1917.744360225" observedRunningTime="2026-02-27 19:16:59.185602312 +0000 UTC m=+1918.664383472" watchObservedRunningTime="2026-02-27 19:16:59.18750185 +0000 UTC m=+1918.666283010" Feb 27 19:16:59 crc kubenswrapper[4981]: I0227 19:16:59.198486 4981 
scope.go:117] "RemoveContainer" containerID="8d637a38fd3283377e57fd4aee97428ac1a3acee6bddb828574c303a3880a1ae" Feb 27 19:16:59 crc kubenswrapper[4981]: I0227 19:16:59.214095 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-db6dc8fcb-c5pxk"] Feb 27 19:16:59 crc kubenswrapper[4981]: I0227 19:16:59.227388 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-db6dc8fcb-c5pxk"] Feb 27 19:16:59 crc kubenswrapper[4981]: I0227 19:16:59.262855 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=8.262828897 podStartE2EDuration="8.262828897s" podCreationTimestamp="2026-02-27 19:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:16:59.224862694 +0000 UTC m=+1918.703643864" watchObservedRunningTime="2026-02-27 19:16:59.262828897 +0000 UTC m=+1918.741610057" Feb 27 19:16:59 crc kubenswrapper[4981]: I0227 19:16:59.500724 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 27 19:16:59 crc kubenswrapper[4981]: I0227 19:16:59.642479 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9" path="/var/lib/kubelet/pods/c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9/volumes" Feb 27 19:17:00 crc kubenswrapper[4981]: I0227 19:17:00.203598 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kp7th" event={"ID":"ecdc9fdc-fd28-4bdc-8403-934067c2ec3d","Type":"ContainerStarted","Data":"d889a6b00da43b08c5a203968124cd83a2f3b4226be1024b9bb165ee13e5913d"} Feb 27 19:17:00 crc kubenswrapper[4981]: I0227 19:17:00.232821 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kp7th" podStartSLOduration=5.11753725 podStartE2EDuration="15.232799554s" 
podCreationTimestamp="2026-02-27 19:16:45 +0000 UTC" firstStartedPulling="2026-02-27 19:16:48.841193062 +0000 UTC m=+1908.319974222" lastFinishedPulling="2026-02-27 19:16:58.956455366 +0000 UTC m=+1918.435236526" observedRunningTime="2026-02-27 19:17:00.22428523 +0000 UTC m=+1919.703066400" watchObservedRunningTime="2026-02-27 19:17:00.232799554 +0000 UTC m=+1919.711580714" Feb 27 19:17:01 crc kubenswrapper[4981]: I0227 19:17:01.221252 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" event={"ID":"392b1bc3-d461-4cc5-8d63-64922c6c3d04","Type":"ContainerStarted","Data":"29a09efd35a13f9913a2db4c13af471be771dc66870af5193ce438f581026f26"} Feb 27 19:17:01 crc kubenswrapper[4981]: I0227 19:17:01.221692 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" event={"ID":"392b1bc3-d461-4cc5-8d63-64922c6c3d04","Type":"ContainerStarted","Data":"a80ed4738b33c46d258ade1f4c61824d5ccb8d85f0a684c76db99ee197e25f02"} Feb 27 19:17:01 crc kubenswrapper[4981]: I0227 19:17:01.224674 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf0ef64-e29f-4945-a2df-e1adb0477806","Type":"ContainerStarted","Data":"1ad1a44c358ac86d4bb2f9487ee4a9d977416506a3b7f46ee16b72419080af48"} Feb 27 19:17:01 crc kubenswrapper[4981]: I0227 19:17:01.245290 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" podStartSLOduration=2.761023781 podStartE2EDuration="43.24526028s" podCreationTimestamp="2026-02-27 19:16:18 +0000 UTC" firstStartedPulling="2026-02-27 19:16:19.741740862 +0000 UTC m=+1879.220522022" lastFinishedPulling="2026-02-27 19:17:00.225977351 +0000 UTC m=+1919.704758521" observedRunningTime="2026-02-27 19:17:01.239452727 +0000 UTC m=+1920.718233887" watchObservedRunningTime="2026-02-27 19:17:01.24526028 +0000 UTC m=+1920.724041440" Feb 27 19:17:02 crc 
kubenswrapper[4981]: I0227 19:17:02.242381 4981 generic.go:334] "Generic (PLEG): container finished" podID="81024b85-8686-478d-b17e-7c599561675b" containerID="5332cf3f5b64f2f3b52b937ec64b6dff04cfa7fdd73b7dc4e33fe5d2c008f675" exitCode=0
Feb 27 19:17:02 crc kubenswrapper[4981]: I0227 19:17:02.242440 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tgsfm" event={"ID":"81024b85-8686-478d-b17e-7c599561675b","Type":"ContainerDied","Data":"5332cf3f5b64f2f3b52b937ec64b6dff04cfa7fdd73b7dc4e33fe5d2c008f675"}
Feb 27 19:17:03 crc kubenswrapper[4981]: I0227 19:17:03.255886 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf0ef64-e29f-4945-a2df-e1adb0477806","Type":"ContainerStarted","Data":"d171d20b78507cfcb0d7e8ca83e10e8baebd90e967f0c79f21d759ca92c135d7"}
Feb 27 19:17:03 crc kubenswrapper[4981]: I0227 19:17:03.256153 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbf0ef64-e29f-4945-a2df-e1adb0477806" containerName="ceilometer-central-agent" containerID="cri-o://ce6d2237386c71fe58768d5fc41ce360d9699bde69ba7ed831cda2f824103e2f" gracePeriod=30
Feb 27 19:17:03 crc kubenswrapper[4981]: I0227 19:17:03.256237 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbf0ef64-e29f-4945-a2df-e1adb0477806" containerName="ceilometer-notification-agent" containerID="cri-o://6f2ec3ef33889a3eebc60b5f33250f1ece331b74527f03423a47750db361ffc1" gracePeriod=30
Feb 27 19:17:03 crc kubenswrapper[4981]: I0227 19:17:03.256265 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbf0ef64-e29f-4945-a2df-e1adb0477806" containerName="sg-core" containerID="cri-o://1ad1a44c358ac86d4bb2f9487ee4a9d977416506a3b7f46ee16b72419080af48" gracePeriod=30
Feb 27 19:17:03 crc kubenswrapper[4981]: I0227 19:17:03.256301 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cbf0ef64-e29f-4945-a2df-e1adb0477806" containerName="proxy-httpd" containerID="cri-o://d171d20b78507cfcb0d7e8ca83e10e8baebd90e967f0c79f21d759ca92c135d7" gracePeriod=30
Feb 27 19:17:03 crc kubenswrapper[4981]: I0227 19:17:03.286781 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.330638762 podStartE2EDuration="19.286755574s" podCreationTimestamp="2026-02-27 19:16:44 +0000 UTC" firstStartedPulling="2026-02-27 19:16:45.760292097 +0000 UTC m=+1905.239073257" lastFinishedPulling="2026-02-27 19:17:02.716408909 +0000 UTC m=+1922.195190069" observedRunningTime="2026-02-27 19:17:03.27552638 +0000 UTC m=+1922.754307540" watchObservedRunningTime="2026-02-27 19:17:03.286755574 +0000 UTC m=+1922.765536734"
Feb 27 19:17:03 crc kubenswrapper[4981]: I0227 19:17:03.608033 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-tgsfm"
Feb 27 19:17:03 crc kubenswrapper[4981]: I0227 19:17:03.660764 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81024b85-8686-478d-b17e-7c599561675b-combined-ca-bundle\") pod \"81024b85-8686-478d-b17e-7c599561675b\" (UID: \"81024b85-8686-478d-b17e-7c599561675b\") "
Feb 27 19:17:03 crc kubenswrapper[4981]: I0227 19:17:03.660873 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81024b85-8686-478d-b17e-7c599561675b-config-data\") pod \"81024b85-8686-478d-b17e-7c599561675b\" (UID: \"81024b85-8686-478d-b17e-7c599561675b\") "
Feb 27 19:17:03 crc kubenswrapper[4981]: I0227 19:17:03.660922 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfkv6\" (UniqueName: \"kubernetes.io/projected/81024b85-8686-478d-b17e-7c599561675b-kube-api-access-cfkv6\") pod \"81024b85-8686-478d-b17e-7c599561675b\" (UID: \"81024b85-8686-478d-b17e-7c599561675b\") "
Feb 27 19:17:03 crc kubenswrapper[4981]: I0227 19:17:03.660981 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/81024b85-8686-478d-b17e-7c599561675b-db-sync-config-data\") pod \"81024b85-8686-478d-b17e-7c599561675b\" (UID: \"81024b85-8686-478d-b17e-7c599561675b\") "
Feb 27 19:17:03 crc kubenswrapper[4981]: I0227 19:17:03.680263 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81024b85-8686-478d-b17e-7c599561675b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "81024b85-8686-478d-b17e-7c599561675b" (UID: "81024b85-8686-478d-b17e-7c599561675b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:17:03 crc kubenswrapper[4981]: I0227 19:17:03.684035 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81024b85-8686-478d-b17e-7c599561675b-kube-api-access-cfkv6" (OuterVolumeSpecName: "kube-api-access-cfkv6") pod "81024b85-8686-478d-b17e-7c599561675b" (UID: "81024b85-8686-478d-b17e-7c599561675b"). InnerVolumeSpecName "kube-api-access-cfkv6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:17:03 crc kubenswrapper[4981]: I0227 19:17:03.698748 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81024b85-8686-478d-b17e-7c599561675b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81024b85-8686-478d-b17e-7c599561675b" (UID: "81024b85-8686-478d-b17e-7c599561675b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:17:03 crc kubenswrapper[4981]: I0227 19:17:03.753351 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81024b85-8686-478d-b17e-7c599561675b-config-data" (OuterVolumeSpecName: "config-data") pod "81024b85-8686-478d-b17e-7c599561675b" (UID: "81024b85-8686-478d-b17e-7c599561675b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:17:03 crc kubenswrapper[4981]: I0227 19:17:03.762983 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81024b85-8686-478d-b17e-7c599561675b-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 19:17:03 crc kubenswrapper[4981]: I0227 19:17:03.763017 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfkv6\" (UniqueName: \"kubernetes.io/projected/81024b85-8686-478d-b17e-7c599561675b-kube-api-access-cfkv6\") on node \"crc\" DevicePath \"\""
Feb 27 19:17:03 crc kubenswrapper[4981]: I0227 19:17:03.763029 4981 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/81024b85-8686-478d-b17e-7c599561675b-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 19:17:03 crc kubenswrapper[4981]: I0227 19:17:03.763038 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81024b85-8686-478d-b17e-7c599561675b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 19:17:04 crc kubenswrapper[4981]: I0227 19:17:04.266837 4981 generic.go:334] "Generic (PLEG): container finished" podID="cbf0ef64-e29f-4945-a2df-e1adb0477806" containerID="d171d20b78507cfcb0d7e8ca83e10e8baebd90e967f0c79f21d759ca92c135d7" exitCode=0
Feb 27 19:17:04 crc kubenswrapper[4981]: I0227 19:17:04.266909 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf0ef64-e29f-4945-a2df-e1adb0477806","Type":"ContainerDied","Data":"d171d20b78507cfcb0d7e8ca83e10e8baebd90e967f0c79f21d759ca92c135d7"}
Feb 27 19:17:04 crc kubenswrapper[4981]: I0227 19:17:04.268630 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tgsfm" event={"ID":"81024b85-8686-478d-b17e-7c599561675b","Type":"ContainerDied","Data":"c6f5609ce7040998b69d84a08d753866c1be7d2d4264488ec4816b541b55dbb1"}
Feb 27 19:17:04 crc kubenswrapper[4981]: I0227 19:17:04.268681 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6f5609ce7040998b69d84a08d753866c1be7d2d4264488ec4816b541b55dbb1"
Feb 27 19:17:04 crc kubenswrapper[4981]: I0227 19:17:04.268694 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-tgsfm"
Feb 27 19:17:04 crc kubenswrapper[4981]: I0227 19:17:04.737238 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 27 19:17:04 crc kubenswrapper[4981]: I0227 19:17:04.777232 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 27 19:17:04 crc kubenswrapper[4981]: I0227 19:17:04.990113 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-x9cmq"]
Feb 27 19:17:04 crc kubenswrapper[4981]: E0227 19:17:04.991745 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9" containerName="barbican-api"
Feb 27 19:17:04 crc kubenswrapper[4981]: I0227 19:17:04.991836 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9" containerName="barbican-api"
Feb 27 19:17:04 crc kubenswrapper[4981]: E0227 19:17:04.991901 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c81e863-b29e-405f-b9e5-9979b695bcd2" containerName="dnsmasq-dns"
Feb 27 19:17:04 crc kubenswrapper[4981]: I0227 19:17:04.991968 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c81e863-b29e-405f-b9e5-9979b695bcd2" containerName="dnsmasq-dns"
Feb 27 19:17:04 crc kubenswrapper[4981]: E0227 19:17:04.992069 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81024b85-8686-478d-b17e-7c599561675b" containerName="glance-db-sync"
Feb 27 19:17:04 crc kubenswrapper[4981]: I0227 19:17:04.992122 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="81024b85-8686-478d-b17e-7c599561675b" containerName="glance-db-sync"
Feb 27 19:17:04 crc kubenswrapper[4981]: E0227 19:17:04.992186 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c81e863-b29e-405f-b9e5-9979b695bcd2" containerName="init"
Feb 27 19:17:04 crc kubenswrapper[4981]: I0227 19:17:04.992237 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c81e863-b29e-405f-b9e5-9979b695bcd2" containerName="init"
Feb 27 19:17:04 crc kubenswrapper[4981]: E0227 19:17:04.992299 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9" containerName="barbican-api-log"
Feb 27 19:17:04 crc kubenswrapper[4981]: I0227 19:17:04.992354 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9" containerName="barbican-api-log"
Feb 27 19:17:04 crc kubenswrapper[4981]: I0227 19:17:04.992570 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9" containerName="barbican-api"
Feb 27 19:17:04 crc kubenswrapper[4981]: I0227 19:17:04.992639 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="81024b85-8686-478d-b17e-7c599561675b" containerName="glance-db-sync"
Feb 27 19:17:04 crc kubenswrapper[4981]: I0227 19:17:04.992704 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5aa1653-e421-4e93-a7d2-ed5f2e13c9a9" containerName="barbican-api-log"
Feb 27 19:17:04 crc kubenswrapper[4981]: I0227 19:17:04.992755 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c81e863-b29e-405f-b9e5-9979b695bcd2" containerName="dnsmasq-dns"
Feb 27 19:17:04 crc kubenswrapper[4981]: I0227 19:17:04.993701 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq"
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.005703 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-x9cmq"]
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.086033 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-ovsdbserver-sb\") pod \"dnsmasq-dns-795f4db4bc-x9cmq\" (UID: \"41cb346d-f755-45dd-bb9d-3a972eada0b0\") " pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq"
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.086167 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-config\") pod \"dnsmasq-dns-795f4db4bc-x9cmq\" (UID: \"41cb346d-f755-45dd-bb9d-3a972eada0b0\") " pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq"
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.086320 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-dns-swift-storage-0\") pod \"dnsmasq-dns-795f4db4bc-x9cmq\" (UID: \"41cb346d-f755-45dd-bb9d-3a972eada0b0\") " pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq"
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.086501 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-dns-svc\") pod \"dnsmasq-dns-795f4db4bc-x9cmq\" (UID: \"41cb346d-f755-45dd-bb9d-3a972eada0b0\") " pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq"
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.086534 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px2m2\" (UniqueName: \"kubernetes.io/projected/41cb346d-f755-45dd-bb9d-3a972eada0b0-kube-api-access-px2m2\") pod \"dnsmasq-dns-795f4db4bc-x9cmq\" (UID: \"41cb346d-f755-45dd-bb9d-3a972eada0b0\") " pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq"
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.086584 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-ovsdbserver-nb\") pod \"dnsmasq-dns-795f4db4bc-x9cmq\" (UID: \"41cb346d-f755-45dd-bb9d-3a972eada0b0\") " pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq"
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.188120 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-ovsdbserver-nb\") pod \"dnsmasq-dns-795f4db4bc-x9cmq\" (UID: \"41cb346d-f755-45dd-bb9d-3a972eada0b0\") " pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq"
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.188199 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-ovsdbserver-sb\") pod \"dnsmasq-dns-795f4db4bc-x9cmq\" (UID: \"41cb346d-f755-45dd-bb9d-3a972eada0b0\") " pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq"
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.188254 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-config\") pod \"dnsmasq-dns-795f4db4bc-x9cmq\" (UID: \"41cb346d-f755-45dd-bb9d-3a972eada0b0\") " pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq"
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.188302 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-dns-swift-storage-0\") pod \"dnsmasq-dns-795f4db4bc-x9cmq\" (UID: \"41cb346d-f755-45dd-bb9d-3a972eada0b0\") " pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq"
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.188395 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-dns-svc\") pod \"dnsmasq-dns-795f4db4bc-x9cmq\" (UID: \"41cb346d-f755-45dd-bb9d-3a972eada0b0\") " pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq"
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.188620 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px2m2\" (UniqueName: \"kubernetes.io/projected/41cb346d-f755-45dd-bb9d-3a972eada0b0-kube-api-access-px2m2\") pod \"dnsmasq-dns-795f4db4bc-x9cmq\" (UID: \"41cb346d-f755-45dd-bb9d-3a972eada0b0\") " pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq"
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.189379 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-ovsdbserver-nb\") pod \"dnsmasq-dns-795f4db4bc-x9cmq\" (UID: \"41cb346d-f755-45dd-bb9d-3a972eada0b0\") " pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq"
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.189477 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-config\") pod \"dnsmasq-dns-795f4db4bc-x9cmq\" (UID: \"41cb346d-f755-45dd-bb9d-3a972eada0b0\") " pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq"
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.189831 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-dns-swift-storage-0\") pod \"dnsmasq-dns-795f4db4bc-x9cmq\" (UID: \"41cb346d-f755-45dd-bb9d-3a972eada0b0\") " pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq"
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.190040 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-ovsdbserver-sb\") pod \"dnsmasq-dns-795f4db4bc-x9cmq\" (UID: \"41cb346d-f755-45dd-bb9d-3a972eada0b0\") " pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq"
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.190142 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-dns-svc\") pod \"dnsmasq-dns-795f4db4bc-x9cmq\" (UID: \"41cb346d-f755-45dd-bb9d-3a972eada0b0\") " pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq"
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.206339 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px2m2\" (UniqueName: \"kubernetes.io/projected/41cb346d-f755-45dd-bb9d-3a972eada0b0-kube-api-access-px2m2\") pod \"dnsmasq-dns-795f4db4bc-x9cmq\" (UID: \"41cb346d-f755-45dd-bb9d-3a972eada0b0\") " pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq"
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.290334 4981 generic.go:334] "Generic (PLEG): container finished" podID="cbf0ef64-e29f-4945-a2df-e1adb0477806" containerID="1ad1a44c358ac86d4bb2f9487ee4a9d977416506a3b7f46ee16b72419080af48" exitCode=2
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.290384 4981 generic.go:334] "Generic (PLEG): container finished" podID="cbf0ef64-e29f-4945-a2df-e1adb0477806" containerID="6f2ec3ef33889a3eebc60b5f33250f1ece331b74527f03423a47750db361ffc1" exitCode=0
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.290699 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2209696a-9590-49d4-b70f-3c86d1cc62f2" containerName="cinder-scheduler" containerID="cri-o://133e7fa1c7d8e7886810e3e58c2277588cf4fea88ff4893dc670ee892e7c6ab7" gracePeriod=30
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.291197 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf0ef64-e29f-4945-a2df-e1adb0477806","Type":"ContainerDied","Data":"1ad1a44c358ac86d4bb2f9487ee4a9d977416506a3b7f46ee16b72419080af48"}
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.291230 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf0ef64-e29f-4945-a2df-e1adb0477806","Type":"ContainerDied","Data":"6f2ec3ef33889a3eebc60b5f33250f1ece331b74527f03423a47750db361ffc1"}
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.291578 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2209696a-9590-49d4-b70f-3c86d1cc62f2" containerName="probe" containerID="cri-o://ce4d7227144ca8584a4d9256a7aa932093994a6e13b790a0f4a256dbf86d8129" gracePeriod=30
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.331675 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq"
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.839760 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-x9cmq"]
Feb 27 19:17:05 crc kubenswrapper[4981]: W0227 19:17:05.859956 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41cb346d_f755_45dd_bb9d_3a972eada0b0.slice/crio-fab8e76902461b61391d56a0246e0d10dbc41c912795c49d3d0de5d9cae1bdc3 WatchSource:0}: Error finding container fab8e76902461b61391d56a0246e0d10dbc41c912795c49d3d0de5d9cae1bdc3: Status 404 returned error can't find the container with id fab8e76902461b61391d56a0246e0d10dbc41c912795c49d3d0de5d9cae1bdc3
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.889028 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.891492 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.900816 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fm8mn"
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.902993 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.905337 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 27 19:17:05 crc kubenswrapper[4981]: I0227 19:17:05.905551 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.025596 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " pod="openstack/glance-default-external-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.025683 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43ae1623-7ee3-4b18-bb82-aa177354f327-scripts\") pod \"glance-default-external-api-0\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " pod="openstack/glance-default-external-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.025764 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43ae1623-7ee3-4b18-bb82-aa177354f327-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " pod="openstack/glance-default-external-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.025883 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67d6g\" (UniqueName: \"kubernetes.io/projected/43ae1623-7ee3-4b18-bb82-aa177354f327-kube-api-access-67d6g\") pod \"glance-default-external-api-0\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " pod="openstack/glance-default-external-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.025925 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ae1623-7ee3-4b18-bb82-aa177354f327-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " pod="openstack/glance-default-external-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.025944 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43ae1623-7ee3-4b18-bb82-aa177354f327-config-data\") pod \"glance-default-external-api-0\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " pod="openstack/glance-default-external-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.025990 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43ae1623-7ee3-4b18-bb82-aa177354f327-logs\") pod \"glance-default-external-api-0\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " pod="openstack/glance-default-external-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.128025 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67d6g\" (UniqueName: \"kubernetes.io/projected/43ae1623-7ee3-4b18-bb82-aa177354f327-kube-api-access-67d6g\") pod \"glance-default-external-api-0\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " pod="openstack/glance-default-external-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.128117 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ae1623-7ee3-4b18-bb82-aa177354f327-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " pod="openstack/glance-default-external-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.128142 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43ae1623-7ee3-4b18-bb82-aa177354f327-config-data\") pod \"glance-default-external-api-0\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " pod="openstack/glance-default-external-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.128183 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43ae1623-7ee3-4b18-bb82-aa177354f327-logs\") pod \"glance-default-external-api-0\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " pod="openstack/glance-default-external-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.128296 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " pod="openstack/glance-default-external-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.128353 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43ae1623-7ee3-4b18-bb82-aa177354f327-scripts\") pod \"glance-default-external-api-0\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " pod="openstack/glance-default-external-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.128415 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43ae1623-7ee3-4b18-bb82-aa177354f327-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " pod="openstack/glance-default-external-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.129020 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43ae1623-7ee3-4b18-bb82-aa177354f327-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " pod="openstack/glance-default-external-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.129762 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.129959 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43ae1623-7ee3-4b18-bb82-aa177354f327-logs\") pod \"glance-default-external-api-0\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " pod="openstack/glance-default-external-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.134630 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43ae1623-7ee3-4b18-bb82-aa177354f327-scripts\") pod \"glance-default-external-api-0\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " pod="openstack/glance-default-external-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.139489 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43ae1623-7ee3-4b18-bb82-aa177354f327-config-data\") pod \"glance-default-external-api-0\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " pod="openstack/glance-default-external-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.140352 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ae1623-7ee3-4b18-bb82-aa177354f327-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " pod="openstack/glance-default-external-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.148723 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.150711 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.153998 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.161025 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.177439 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " pod="openstack/glance-default-external-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.181878 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67d6g\" (UniqueName: \"kubernetes.io/projected/43ae1623-7ee3-4b18-bb82-aa177354f327-kube-api-access-67d6g\") pod \"glance-default-external-api-0\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " pod="openstack/glance-default-external-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.240279 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kp7th"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.240330 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kp7th"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.300881 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq" event={"ID":"41cb346d-f755-45dd-bb9d-3a972eada0b0","Type":"ContainerStarted","Data":"fab8e76902461b61391d56a0246e0d10dbc41c912795c49d3d0de5d9cae1bdc3"}
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.316331 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.341739 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " pod="openstack/glance-default-internal-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.342032 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3832dc0a-7cc6-4207-a928-7888e8efee3f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " pod="openstack/glance-default-internal-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.342137 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3832dc0a-7cc6-4207-a928-7888e8efee3f-logs\") pod \"glance-default-internal-api-0\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " pod="openstack/glance-default-internal-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.342199 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3832dc0a-7cc6-4207-a928-7888e8efee3f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " pod="openstack/glance-default-internal-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.342276 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3832dc0a-7cc6-4207-a928-7888e8efee3f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " pod="openstack/glance-default-internal-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.342328 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3832dc0a-7cc6-4207-a928-7888e8efee3f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " pod="openstack/glance-default-internal-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.342481 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pm4g\" (UniqueName: \"kubernetes.io/projected/3832dc0a-7cc6-4207-a928-7888e8efee3f-kube-api-access-4pm4g\") pod \"glance-default-internal-api-0\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " pod="openstack/glance-default-internal-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.444543 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " pod="openstack/glance-default-internal-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.444642 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3832dc0a-7cc6-4207-a928-7888e8efee3f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " pod="openstack/glance-default-internal-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.444664 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3832dc0a-7cc6-4207-a928-7888e8efee3f-logs\") pod \"glance-default-internal-api-0\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " pod="openstack/glance-default-internal-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.444689 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3832dc0a-7cc6-4207-a928-7888e8efee3f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " pod="openstack/glance-default-internal-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.444721 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3832dc0a-7cc6-4207-a928-7888e8efee3f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " pod="openstack/glance-default-internal-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.444742 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3832dc0a-7cc6-4207-a928-7888e8efee3f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " pod="openstack/glance-default-internal-api-0"
Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.444769 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pm4g\" (UniqueName: \"kubernetes.io/projected/3832dc0a-7cc6-4207-a928-7888e8efee3f-kube-api-access-4pm4g\") pod \"glance-default-internal-api-0\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.445477 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3832dc0a-7cc6-4207-a928-7888e8efee3f-logs\") pod \"glance-default-internal-api-0\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.446257 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3832dc0a-7cc6-4207-a928-7888e8efee3f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.446355 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.453570 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3832dc0a-7cc6-4207-a928-7888e8efee3f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.454319 4981 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3832dc0a-7cc6-4207-a928-7888e8efee3f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.454627 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3832dc0a-7cc6-4207-a928-7888e8efee3f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.466323 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pm4g\" (UniqueName: \"kubernetes.io/projected/3832dc0a-7cc6-4207-a928-7888e8efee3f-kube-api-access-4pm4g\") pod \"glance-default-internal-api-0\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.489418 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.773824 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:06 crc kubenswrapper[4981]: I0227 19:17:06.939173 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 19:17:06 crc kubenswrapper[4981]: W0227 19:17:06.939762 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ae1623_7ee3_4b18_bb82_aa177354f327.slice/crio-84d36590d14b2a02ab93c4e30f07d48b9f1f3face7cf2c6dd3e36e2fdb3159d2 WatchSource:0}: Error finding container 84d36590d14b2a02ab93c4e30f07d48b9f1f3face7cf2c6dd3e36e2fdb3159d2: Status 404 returned error can't find the container with id 84d36590d14b2a02ab93c4e30f07d48b9f1f3face7cf2c6dd3e36e2fdb3159d2 Feb 27 19:17:07 crc kubenswrapper[4981]: I0227 19:17:07.291377 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kp7th" podUID="ecdc9fdc-fd28-4bdc-8403-934067c2ec3d" containerName="registry-server" probeResult="failure" output=< Feb 27 19:17:07 crc kubenswrapper[4981]: timeout: failed to connect service ":50051" within 1s Feb 27 19:17:07 crc kubenswrapper[4981]: > Feb 27 19:17:07 crc kubenswrapper[4981]: I0227 19:17:07.316517 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq" event={"ID":"41cb346d-f755-45dd-bb9d-3a972eada0b0","Type":"ContainerStarted","Data":"855f58196bfd9a19e38237c0925f91032b05a5e57aa28071294b7bf59ecdbfdb"} Feb 27 19:17:07 crc kubenswrapper[4981]: I0227 19:17:07.328711 4981 generic.go:334] "Generic (PLEG): container finished" podID="2209696a-9590-49d4-b70f-3c86d1cc62f2" containerID="ce4d7227144ca8584a4d9256a7aa932093994a6e13b790a0f4a256dbf86d8129" exitCode=0 Feb 27 19:17:07 crc kubenswrapper[4981]: I0227 19:17:07.328793 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"2209696a-9590-49d4-b70f-3c86d1cc62f2","Type":"ContainerDied","Data":"ce4d7227144ca8584a4d9256a7aa932093994a6e13b790a0f4a256dbf86d8129"} Feb 27 19:17:07 crc kubenswrapper[4981]: I0227 19:17:07.330351 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43ae1623-7ee3-4b18-bb82-aa177354f327","Type":"ContainerStarted","Data":"84d36590d14b2a02ab93c4e30f07d48b9f1f3face7cf2c6dd3e36e2fdb3159d2"} Feb 27 19:17:07 crc kubenswrapper[4981]: W0227 19:17:07.336790 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3832dc0a_7cc6_4207_a928_7888e8efee3f.slice/crio-eee2c9b90d3c2a2f3fadd34e99ae2d7f656e452062b8c0ad6ad1b9f71e31b190 WatchSource:0}: Error finding container eee2c9b90d3c2a2f3fadd34e99ae2d7f656e452062b8c0ad6ad1b9f71e31b190: Status 404 returned error can't find the container with id eee2c9b90d3c2a2f3fadd34e99ae2d7f656e452062b8c0ad6ad1b9f71e31b190 Feb 27 19:17:07 crc kubenswrapper[4981]: I0227 19:17:07.342541 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 19:17:07 crc kubenswrapper[4981]: I0227 19:17:07.630393 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" Feb 27 19:17:07 crc kubenswrapper[4981]: E0227 19:17:07.630963 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.410624 4981 generic.go:334] "Generic (PLEG): container finished" 
podID="2209696a-9590-49d4-b70f-3c86d1cc62f2" containerID="133e7fa1c7d8e7886810e3e58c2277588cf4fea88ff4893dc670ee892e7c6ab7" exitCode=0 Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.413302 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2209696a-9590-49d4-b70f-3c86d1cc62f2","Type":"ContainerDied","Data":"133e7fa1c7d8e7886810e3e58c2277588cf4fea88ff4893dc670ee892e7c6ab7"} Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.418764 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3832dc0a-7cc6-4207-a928-7888e8efee3f","Type":"ContainerStarted","Data":"568583953d6093f1d60b2ba27dc4cb9c593eb884e21cfcf4f13ea7f777856965"} Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.418816 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3832dc0a-7cc6-4207-a928-7888e8efee3f","Type":"ContainerStarted","Data":"eee2c9b90d3c2a2f3fadd34e99ae2d7f656e452062b8c0ad6ad1b9f71e31b190"} Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.443920 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43ae1623-7ee3-4b18-bb82-aa177354f327","Type":"ContainerStarted","Data":"960714bfdb9322ccdb21cc4dfccbeb77cb41f8d69cf4cc989ddd293036a71f62"} Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.446746 4981 generic.go:334] "Generic (PLEG): container finished" podID="41cb346d-f755-45dd-bb9d-3a972eada0b0" containerID="855f58196bfd9a19e38237c0925f91032b05a5e57aa28071294b7bf59ecdbfdb" exitCode=0 Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.446800 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq" event={"ID":"41cb346d-f755-45dd-bb9d-3a972eada0b0","Type":"ContainerDied","Data":"855f58196bfd9a19e38237c0925f91032b05a5e57aa28071294b7bf59ecdbfdb"} Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 
19:17:08.548226 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.671796 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.716671 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2209696a-9590-49d4-b70f-3c86d1cc62f2-scripts\") pod \"2209696a-9590-49d4-b70f-3c86d1cc62f2\" (UID: \"2209696a-9590-49d4-b70f-3c86d1cc62f2\") " Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.717095 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kgw5\" (UniqueName: \"kubernetes.io/projected/2209696a-9590-49d4-b70f-3c86d1cc62f2-kube-api-access-9kgw5\") pod \"2209696a-9590-49d4-b70f-3c86d1cc62f2\" (UID: \"2209696a-9590-49d4-b70f-3c86d1cc62f2\") " Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.717298 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2209696a-9590-49d4-b70f-3c86d1cc62f2-etc-machine-id\") pod \"2209696a-9590-49d4-b70f-3c86d1cc62f2\" (UID: \"2209696a-9590-49d4-b70f-3c86d1cc62f2\") " Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.717398 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2209696a-9590-49d4-b70f-3c86d1cc62f2-config-data\") pod \"2209696a-9590-49d4-b70f-3c86d1cc62f2\" (UID: \"2209696a-9590-49d4-b70f-3c86d1cc62f2\") " Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.717388 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2209696a-9590-49d4-b70f-3c86d1cc62f2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod 
"2209696a-9590-49d4-b70f-3c86d1cc62f2" (UID: "2209696a-9590-49d4-b70f-3c86d1cc62f2"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.718293 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2209696a-9590-49d4-b70f-3c86d1cc62f2-combined-ca-bundle\") pod \"2209696a-9590-49d4-b70f-3c86d1cc62f2\" (UID: \"2209696a-9590-49d4-b70f-3c86d1cc62f2\") " Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.718454 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2209696a-9590-49d4-b70f-3c86d1cc62f2-config-data-custom\") pod \"2209696a-9590-49d4-b70f-3c86d1cc62f2\" (UID: \"2209696a-9590-49d4-b70f-3c86d1cc62f2\") " Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.719178 4981 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2209696a-9590-49d4-b70f-3c86d1cc62f2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.721574 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2209696a-9590-49d4-b70f-3c86d1cc62f2-scripts" (OuterVolumeSpecName: "scripts") pod "2209696a-9590-49d4-b70f-3c86d1cc62f2" (UID: "2209696a-9590-49d4-b70f-3c86d1cc62f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.723876 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2209696a-9590-49d4-b70f-3c86d1cc62f2-kube-api-access-9kgw5" (OuterVolumeSpecName: "kube-api-access-9kgw5") pod "2209696a-9590-49d4-b70f-3c86d1cc62f2" (UID: "2209696a-9590-49d4-b70f-3c86d1cc62f2"). InnerVolumeSpecName "kube-api-access-9kgw5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.726591 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2209696a-9590-49d4-b70f-3c86d1cc62f2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2209696a-9590-49d4-b70f-3c86d1cc62f2" (UID: "2209696a-9590-49d4-b70f-3c86d1cc62f2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.753970 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.822138 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kgw5\" (UniqueName: \"kubernetes.io/projected/2209696a-9590-49d4-b70f-3c86d1cc62f2-kube-api-access-9kgw5\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.822173 4981 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2209696a-9590-49d4-b70f-3c86d1cc62f2-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.822188 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2209696a-9590-49d4-b70f-3c86d1cc62f2-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.867206 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2209696a-9590-49d4-b70f-3c86d1cc62f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2209696a-9590-49d4-b70f-3c86d1cc62f2" (UID: "2209696a-9590-49d4-b70f-3c86d1cc62f2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.902201 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2209696a-9590-49d4-b70f-3c86d1cc62f2-config-data" (OuterVolumeSpecName: "config-data") pod "2209696a-9590-49d4-b70f-3c86d1cc62f2" (UID: "2209696a-9590-49d4-b70f-3c86d1cc62f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.925935 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2209696a-9590-49d4-b70f-3c86d1cc62f2-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:08 crc kubenswrapper[4981]: I0227 19:17:08.925972 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2209696a-9590-49d4-b70f-3c86d1cc62f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.458951 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2209696a-9590-49d4-b70f-3c86d1cc62f2","Type":"ContainerDied","Data":"f1ab996547dd15252ea3ce7f6d86a9fb2d39c1a4318e5a2f88f27eb49b4f4e37"} Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.458968 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.459352 4981 scope.go:117] "RemoveContainer" containerID="ce4d7227144ca8584a4d9256a7aa932093994a6e13b790a0f4a256dbf86d8129" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.461220 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43ae1623-7ee3-4b18-bb82-aa177354f327","Type":"ContainerStarted","Data":"ee1beae4adf688b9d46b9f3dd39cba400ca8551a639f862cc5873a822addc8ca"} Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.461386 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="43ae1623-7ee3-4b18-bb82-aa177354f327" containerName="glance-httpd" containerID="cri-o://ee1beae4adf688b9d46b9f3dd39cba400ca8551a639f862cc5873a822addc8ca" gracePeriod=30 Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.461495 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="43ae1623-7ee3-4b18-bb82-aa177354f327" containerName="glance-log" containerID="cri-o://960714bfdb9322ccdb21cc4dfccbeb77cb41f8d69cf4cc989ddd293036a71f62" gracePeriod=30 Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.473945 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq" event={"ID":"41cb346d-f755-45dd-bb9d-3a972eada0b0","Type":"ContainerStarted","Data":"620caf60abe754384b34a3830b884ec113aaa713430f21d66c8fceead31fb1f9"} Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.474374 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.505199 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.505169122 
podStartE2EDuration="5.505169122s" podCreationTimestamp="2026-02-27 19:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:17:09.499380039 +0000 UTC m=+1928.978161209" watchObservedRunningTime="2026-02-27 19:17:09.505169122 +0000 UTC m=+1928.983950282" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.536858 4981 scope.go:117] "RemoveContainer" containerID="133e7fa1c7d8e7886810e3e58c2277588cf4fea88ff4893dc670ee892e7c6ab7" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.567820 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq" podStartSLOduration=5.5677930799999995 podStartE2EDuration="5.56779308s" podCreationTimestamp="2026-02-27 19:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:17:09.528813337 +0000 UTC m=+1929.007594497" watchObservedRunningTime="2026-02-27 19:17:09.56779308 +0000 UTC m=+1929.046574250" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.599503 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.659827 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.659877 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 19:17:09 crc kubenswrapper[4981]: E0227 19:17:09.660257 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2209696a-9590-49d4-b70f-3c86d1cc62f2" containerName="probe" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.660274 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="2209696a-9590-49d4-b70f-3c86d1cc62f2" containerName="probe" Feb 27 19:17:09 crc kubenswrapper[4981]: E0227 
19:17:09.660300 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2209696a-9590-49d4-b70f-3c86d1cc62f2" containerName="cinder-scheduler" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.660308 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="2209696a-9590-49d4-b70f-3c86d1cc62f2" containerName="cinder-scheduler" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.660526 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="2209696a-9590-49d4-b70f-3c86d1cc62f2" containerName="probe" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.660555 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="2209696a-9590-49d4-b70f-3c86d1cc62f2" containerName="cinder-scheduler" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.661806 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.661925 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.667822 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.681991 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-nw57q"] Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.684217 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-nw57q" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.698393 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-nw57q"] Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.774160 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-5kflz"] Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.775957 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5kflz" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.794850 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5kflz"] Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.848591 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed0c2fbd-f556-4dba-a374-4f212f96210a-operator-scripts\") pod \"nova-api-db-create-nw57q\" (UID: \"ed0c2fbd-f556-4dba-a374-4f212f96210a\") " pod="openstack/nova-api-db-create-nw57q" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.848642 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmdgd\" (UniqueName: \"kubernetes.io/projected/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-kube-api-access-bmdgd\") pod \"cinder-scheduler-0\" (UID: \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\") " pod="openstack/cinder-scheduler-0" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.848842 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\") " pod="openstack/cinder-scheduler-0" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.849042 4981 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-config-data\") pod \"cinder-scheduler-0\" (UID: \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\") " pod="openstack/cinder-scheduler-0" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.849087 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-scripts\") pod \"cinder-scheduler-0\" (UID: \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\") " pod="openstack/cinder-scheduler-0" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.849208 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\") " pod="openstack/cinder-scheduler-0" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.849302 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\") " pod="openstack/cinder-scheduler-0" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.849340 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g76qc\" (UniqueName: \"kubernetes.io/projected/ed0c2fbd-f556-4dba-a374-4f212f96210a-kube-api-access-g76qc\") pod \"nova-api-db-create-nw57q\" (UID: \"ed0c2fbd-f556-4dba-a374-4f212f96210a\") " pod="openstack/nova-api-db-create-nw57q" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.875115 4981 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-api-badb-account-create-update-4hf5f"] Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.886411 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-badb-account-create-update-4hf5f" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.891187 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-badb-account-create-update-4hf5f"] Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.911288 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.951170 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\") " pod="openstack/cinder-scheduler-0" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.951562 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\") " pod="openstack/cinder-scheduler-0" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.951605 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g76qc\" (UniqueName: \"kubernetes.io/projected/ed0c2fbd-f556-4dba-a374-4f212f96210a-kube-api-access-g76qc\") pod \"nova-api-db-create-nw57q\" (UID: \"ed0c2fbd-f556-4dba-a374-4f212f96210a\") " pod="openstack/nova-api-db-create-nw57q" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.951645 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmdgd\" (UniqueName: 
\"kubernetes.io/projected/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-kube-api-access-bmdgd\") pod \"cinder-scheduler-0\" (UID: \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\") " pod="openstack/cinder-scheduler-0" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.951681 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed0c2fbd-f556-4dba-a374-4f212f96210a-operator-scripts\") pod \"nova-api-db-create-nw57q\" (UID: \"ed0c2fbd-f556-4dba-a374-4f212f96210a\") " pod="openstack/nova-api-db-create-nw57q" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.951713 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\") " pod="openstack/cinder-scheduler-0" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.951795 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-config-data\") pod \"cinder-scheduler-0\" (UID: \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\") " pod="openstack/cinder-scheduler-0" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.951829 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-scripts\") pod \"cinder-scheduler-0\" (UID: \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\") " pod="openstack/cinder-scheduler-0" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.951852 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-996h5\" (UniqueName: \"kubernetes.io/projected/19242b3f-f738-49a0-be1b-578a62ec5f22-kube-api-access-996h5\") pod \"nova-cell0-db-create-5kflz\" 
(UID: \"19242b3f-f738-49a0-be1b-578a62ec5f22\") " pod="openstack/nova-cell0-db-create-5kflz" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.951877 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19242b3f-f738-49a0-be1b-578a62ec5f22-operator-scripts\") pod \"nova-cell0-db-create-5kflz\" (UID: \"19242b3f-f738-49a0-be1b-578a62ec5f22\") " pod="openstack/nova-cell0-db-create-5kflz" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.952007 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\") " pod="openstack/cinder-scheduler-0" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.953461 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed0c2fbd-f556-4dba-a374-4f212f96210a-operator-scripts\") pod \"nova-api-db-create-nw57q\" (UID: \"ed0c2fbd-f556-4dba-a374-4f212f96210a\") " pod="openstack/nova-api-db-create-nw57q" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.959726 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\") " pod="openstack/cinder-scheduler-0" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.961513 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-scripts\") pod \"cinder-scheduler-0\" (UID: \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\") " pod="openstack/cinder-scheduler-0" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 
19:17:09.961831 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\") " pod="openstack/cinder-scheduler-0" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.964750 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-config-data\") pod \"cinder-scheduler-0\" (UID: \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\") " pod="openstack/cinder-scheduler-0" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.977664 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmdgd\" (UniqueName: \"kubernetes.io/projected/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-kube-api-access-bmdgd\") pod \"cinder-scheduler-0\" (UID: \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\") " pod="openstack/cinder-scheduler-0" Feb 27 19:17:09 crc kubenswrapper[4981]: I0227 19:17:09.996636 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g76qc\" (UniqueName: \"kubernetes.io/projected/ed0c2fbd-f556-4dba-a374-4f212f96210a-kube-api-access-g76qc\") pod \"nova-api-db-create-nw57q\" (UID: \"ed0c2fbd-f556-4dba-a374-4f212f96210a\") " pod="openstack/nova-api-db-create-nw57q" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.003869 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.024404 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-nw57q" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.053781 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-996h5\" (UniqueName: \"kubernetes.io/projected/19242b3f-f738-49a0-be1b-578a62ec5f22-kube-api-access-996h5\") pod \"nova-cell0-db-create-5kflz\" (UID: \"19242b3f-f738-49a0-be1b-578a62ec5f22\") " pod="openstack/nova-cell0-db-create-5kflz" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.053839 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlp42\" (UniqueName: \"kubernetes.io/projected/eccdd187-3938-4331-82f9-b5dac2e9c1c1-kube-api-access-dlp42\") pod \"nova-api-badb-account-create-update-4hf5f\" (UID: \"eccdd187-3938-4331-82f9-b5dac2e9c1c1\") " pod="openstack/nova-api-badb-account-create-update-4hf5f" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.053873 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19242b3f-f738-49a0-be1b-578a62ec5f22-operator-scripts\") pod \"nova-cell0-db-create-5kflz\" (UID: \"19242b3f-f738-49a0-be1b-578a62ec5f22\") " pod="openstack/nova-cell0-db-create-5kflz" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.053923 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eccdd187-3938-4331-82f9-b5dac2e9c1c1-operator-scripts\") pod \"nova-api-badb-account-create-update-4hf5f\" (UID: \"eccdd187-3938-4331-82f9-b5dac2e9c1c1\") " pod="openstack/nova-api-badb-account-create-update-4hf5f" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.055795 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19242b3f-f738-49a0-be1b-578a62ec5f22-operator-scripts\") pod 
\"nova-cell0-db-create-5kflz\" (UID: \"19242b3f-f738-49a0-be1b-578a62ec5f22\") " pod="openstack/nova-cell0-db-create-5kflz" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.079590 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-hpvzf"] Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.080811 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hpvzf" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.090938 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-996h5\" (UniqueName: \"kubernetes.io/projected/19242b3f-f738-49a0-be1b-578a62ec5f22-kube-api-access-996h5\") pod \"nova-cell0-db-create-5kflz\" (UID: \"19242b3f-f738-49a0-be1b-578a62ec5f22\") " pod="openstack/nova-cell0-db-create-5kflz" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.107014 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5kflz" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.123714 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-hpvzf"] Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.133368 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-bbcc-account-create-update-rw9k7"] Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.135001 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-bbcc-account-create-update-rw9k7" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.139662 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.152629 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-bbcc-account-create-update-rw9k7"] Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.156360 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlp42\" (UniqueName: \"kubernetes.io/projected/eccdd187-3938-4331-82f9-b5dac2e9c1c1-kube-api-access-dlp42\") pod \"nova-api-badb-account-create-update-4hf5f\" (UID: \"eccdd187-3938-4331-82f9-b5dac2e9c1c1\") " pod="openstack/nova-api-badb-account-create-update-4hf5f" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.156772 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eccdd187-3938-4331-82f9-b5dac2e9c1c1-operator-scripts\") pod \"nova-api-badb-account-create-update-4hf5f\" (UID: \"eccdd187-3938-4331-82f9-b5dac2e9c1c1\") " pod="openstack/nova-api-badb-account-create-update-4hf5f" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.157724 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eccdd187-3938-4331-82f9-b5dac2e9c1c1-operator-scripts\") pod \"nova-api-badb-account-create-update-4hf5f\" (UID: \"eccdd187-3938-4331-82f9-b5dac2e9c1c1\") " pod="openstack/nova-api-badb-account-create-update-4hf5f" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.212687 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlp42\" (UniqueName: \"kubernetes.io/projected/eccdd187-3938-4331-82f9-b5dac2e9c1c1-kube-api-access-dlp42\") pod 
\"nova-api-badb-account-create-update-4hf5f\" (UID: \"eccdd187-3938-4331-82f9-b5dac2e9c1c1\") " pod="openstack/nova-api-badb-account-create-update-4hf5f" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.224527 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-badb-account-create-update-4hf5f" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.258662 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzfmc\" (UniqueName: \"kubernetes.io/projected/2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32-kube-api-access-gzfmc\") pod \"nova-cell0-bbcc-account-create-update-rw9k7\" (UID: \"2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32\") " pod="openstack/nova-cell0-bbcc-account-create-update-rw9k7" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.258786 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpd6q\" (UniqueName: \"kubernetes.io/projected/99c593ba-9134-4372-8392-6903d47aba28-kube-api-access-jpd6q\") pod \"nova-cell1-db-create-hpvzf\" (UID: \"99c593ba-9134-4372-8392-6903d47aba28\") " pod="openstack/nova-cell1-db-create-hpvzf" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.258874 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99c593ba-9134-4372-8392-6903d47aba28-operator-scripts\") pod \"nova-cell1-db-create-hpvzf\" (UID: \"99c593ba-9134-4372-8392-6903d47aba28\") " pod="openstack/nova-cell1-db-create-hpvzf" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.258920 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32-operator-scripts\") pod \"nova-cell0-bbcc-account-create-update-rw9k7\" (UID: 
\"2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32\") " pod="openstack/nova-cell0-bbcc-account-create-update-rw9k7" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.315440 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-a620-account-create-update-4fnbx"] Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.317420 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-a620-account-create-update-4fnbx" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.319529 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.360202 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzfmc\" (UniqueName: \"kubernetes.io/projected/2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32-kube-api-access-gzfmc\") pod \"nova-cell0-bbcc-account-create-update-rw9k7\" (UID: \"2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32\") " pod="openstack/nova-cell0-bbcc-account-create-update-rw9k7" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.360293 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpd6q\" (UniqueName: \"kubernetes.io/projected/99c593ba-9134-4372-8392-6903d47aba28-kube-api-access-jpd6q\") pod \"nova-cell1-db-create-hpvzf\" (UID: \"99c593ba-9134-4372-8392-6903d47aba28\") " pod="openstack/nova-cell1-db-create-hpvzf" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.360346 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99c593ba-9134-4372-8392-6903d47aba28-operator-scripts\") pod \"nova-cell1-db-create-hpvzf\" (UID: \"99c593ba-9134-4372-8392-6903d47aba28\") " pod="openstack/nova-cell1-db-create-hpvzf" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.360362 4981 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32-operator-scripts\") pod \"nova-cell0-bbcc-account-create-update-rw9k7\" (UID: \"2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32\") " pod="openstack/nova-cell0-bbcc-account-create-update-rw9k7" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.361538 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32-operator-scripts\") pod \"nova-cell0-bbcc-account-create-update-rw9k7\" (UID: \"2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32\") " pod="openstack/nova-cell0-bbcc-account-create-update-rw9k7" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.362038 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-a620-account-create-update-4fnbx"] Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.363550 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99c593ba-9134-4372-8392-6903d47aba28-operator-scripts\") pod \"nova-cell1-db-create-hpvzf\" (UID: \"99c593ba-9134-4372-8392-6903d47aba28\") " pod="openstack/nova-cell1-db-create-hpvzf" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.384235 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzfmc\" (UniqueName: \"kubernetes.io/projected/2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32-kube-api-access-gzfmc\") pod \"nova-cell0-bbcc-account-create-update-rw9k7\" (UID: \"2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32\") " pod="openstack/nova-cell0-bbcc-account-create-update-rw9k7" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.388122 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpd6q\" (UniqueName: \"kubernetes.io/projected/99c593ba-9134-4372-8392-6903d47aba28-kube-api-access-jpd6q\") pod \"nova-cell1-db-create-hpvzf\" (UID: 
\"99c593ba-9134-4372-8392-6903d47aba28\") " pod="openstack/nova-cell1-db-create-hpvzf" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.445237 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hpvzf" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.466102 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eac1eaf7-6fea-4dae-b8f3-b81615d30ee0-operator-scripts\") pod \"nova-cell1-a620-account-create-update-4fnbx\" (UID: \"eac1eaf7-6fea-4dae-b8f3-b81615d30ee0\") " pod="openstack/nova-cell1-a620-account-create-update-4fnbx" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.466175 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps47h\" (UniqueName: \"kubernetes.io/projected/eac1eaf7-6fea-4dae-b8f3-b81615d30ee0-kube-api-access-ps47h\") pod \"nova-cell1-a620-account-create-update-4fnbx\" (UID: \"eac1eaf7-6fea-4dae-b8f3-b81615d30ee0\") " pod="openstack/nova-cell1-a620-account-create-update-4fnbx" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.490870 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-bbcc-account-create-update-rw9k7" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.508118 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3832dc0a-7cc6-4207-a928-7888e8efee3f","Type":"ContainerStarted","Data":"cafb740cb4b2feea0c56af385f6a4dd4d0f8dbdd880b5a3bb59ad79c5321cb28"} Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.508130 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3832dc0a-7cc6-4207-a928-7888e8efee3f" containerName="glance-log" containerID="cri-o://568583953d6093f1d60b2ba27dc4cb9c593eb884e21cfcf4f13ea7f777856965" gracePeriod=30 Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.508281 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3832dc0a-7cc6-4207-a928-7888e8efee3f" containerName="glance-httpd" containerID="cri-o://cafb740cb4b2feea0c56af385f6a4dd4d0f8dbdd880b5a3bb59ad79c5321cb28" gracePeriod=30 Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.530767 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.530723168 podStartE2EDuration="5.530723168s" podCreationTimestamp="2026-02-27 19:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:17:10.529748208 +0000 UTC m=+1930.008529378" watchObservedRunningTime="2026-02-27 19:17:10.530723168 +0000 UTC m=+1930.009504328" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.530963 4981 generic.go:334] "Generic (PLEG): container finished" podID="43ae1623-7ee3-4b18-bb82-aa177354f327" containerID="ee1beae4adf688b9d46b9f3dd39cba400ca8551a639f862cc5873a822addc8ca" exitCode=0 Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 
19:17:10.530990 4981 generic.go:334] "Generic (PLEG): container finished" podID="43ae1623-7ee3-4b18-bb82-aa177354f327" containerID="960714bfdb9322ccdb21cc4dfccbeb77cb41f8d69cf4cc989ddd293036a71f62" exitCode=143 Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.531168 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43ae1623-7ee3-4b18-bb82-aa177354f327","Type":"ContainerDied","Data":"ee1beae4adf688b9d46b9f3dd39cba400ca8551a639f862cc5873a822addc8ca"} Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.531198 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43ae1623-7ee3-4b18-bb82-aa177354f327","Type":"ContainerDied","Data":"960714bfdb9322ccdb21cc4dfccbeb77cb41f8d69cf4cc989ddd293036a71f62"} Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.570482 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps47h\" (UniqueName: \"kubernetes.io/projected/eac1eaf7-6fea-4dae-b8f3-b81615d30ee0-kube-api-access-ps47h\") pod \"nova-cell1-a620-account-create-update-4fnbx\" (UID: \"eac1eaf7-6fea-4dae-b8f3-b81615d30ee0\") " pod="openstack/nova-cell1-a620-account-create-update-4fnbx" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.570740 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eac1eaf7-6fea-4dae-b8f3-b81615d30ee0-operator-scripts\") pod \"nova-cell1-a620-account-create-update-4fnbx\" (UID: \"eac1eaf7-6fea-4dae-b8f3-b81615d30ee0\") " pod="openstack/nova-cell1-a620-account-create-update-4fnbx" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.576722 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eac1eaf7-6fea-4dae-b8f3-b81615d30ee0-operator-scripts\") pod \"nova-cell1-a620-account-create-update-4fnbx\" 
(UID: \"eac1eaf7-6fea-4dae-b8f3-b81615d30ee0\") " pod="openstack/nova-cell1-a620-account-create-update-4fnbx" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.618392 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps47h\" (UniqueName: \"kubernetes.io/projected/eac1eaf7-6fea-4dae-b8f3-b81615d30ee0-kube-api-access-ps47h\") pod \"nova-cell1-a620-account-create-update-4fnbx\" (UID: \"eac1eaf7-6fea-4dae-b8f3-b81615d30ee0\") " pod="openstack/nova-cell1-a620-account-create-update-4fnbx" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.643503 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.675470 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.773903 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-nw57q"] Feb 27 19:17:10 crc kubenswrapper[4981]: W0227 19:17:10.814223 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded0c2fbd_f556_4dba_a374_4f212f96210a.slice/crio-3ca06c4efc04d1e7ea906a1c6734c7e41a265691c91cddb71d9977eea570a0b3 WatchSource:0}: Error finding container 3ca06c4efc04d1e7ea906a1c6734c7e41a265691c91cddb71d9977eea570a0b3: Status 404 returned error can't find the container with id 3ca06c4efc04d1e7ea906a1c6734c7e41a265691c91cddb71d9977eea570a0b3 Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.890570 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-a620-account-create-update-4fnbx" Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.910863 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5kflz"] Feb 27 19:17:10 crc kubenswrapper[4981]: I0227 19:17:10.927924 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-badb-account-create-update-4hf5f"] Feb 27 19:17:10 crc kubenswrapper[4981]: W0227 19:17:10.928830 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeccdd187_3938_4331_82f9_b5dac2e9c1c1.slice/crio-3fa08cae0d19eff44fe049538c47338dc7fcb2d0cf205553c7cc9f98b137f3d0 WatchSource:0}: Error finding container 3fa08cae0d19eff44fe049538c47338dc7fcb2d0cf205553c7cc9f98b137f3d0: Status 404 returned error can't find the container with id 3fa08cae0d19eff44fe049538c47338dc7fcb2d0cf205553c7cc9f98b137f3d0 Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.100981 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-hpvzf"] Feb 27 19:17:11 crc kubenswrapper[4981]: W0227 19:17:11.136373 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99c593ba_9134_4372_8392_6903d47aba28.slice/crio-d83f9aa34dc1820bd2259be94f8526a6add249af3cfc7c8c0b128a80caa2cf27 WatchSource:0}: Error finding container d83f9aa34dc1820bd2259be94f8526a6add249af3cfc7c8c0b128a80caa2cf27: Status 404 returned error can't find the container with id d83f9aa34dc1820bd2259be94f8526a6add249af3cfc7c8c0b128a80caa2cf27 Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.389714 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-bbcc-account-create-update-rw9k7"] Feb 27 19:17:11 crc kubenswrapper[4981]: W0227 19:17:11.421176 4981 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2568e8f5_ef00_4eb0_aec5_ee93e7bdeb32.slice/crio-39ff05dc031af9e09325c4e079bd5c68741f0d7647784d7d73b61621a09de604 WatchSource:0}: Error finding container 39ff05dc031af9e09325c4e079bd5c68741f0d7647784d7d73b61621a09de604: Status 404 returned error can't find the container with id 39ff05dc031af9e09325c4e079bd5c68741f0d7647784d7d73b61621a09de604 Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.524939 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-a620-account-create-update-4fnbx"] Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.598659 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a620-account-create-update-4fnbx" event={"ID":"eac1eaf7-6fea-4dae-b8f3-b81615d30ee0","Type":"ContainerStarted","Data":"f27bb6bb541cbc25d24487063a1e500ae58a09a7449db1ba1abe9f11d5a8e768"} Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.604902 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5kflz" event={"ID":"19242b3f-f738-49a0-be1b-578a62ec5f22","Type":"ContainerStarted","Data":"9bf87d74da42f41aead6e2511ae77fd0b199c2f99ec7fddc34b2d5c03a436bfa"} Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.605201 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5kflz" event={"ID":"19242b3f-f738-49a0-be1b-578a62ec5f22","Type":"ContainerStarted","Data":"8e6cd79bec3f9b61ee2b8fa458cbd7f9249cb5718c3b9e0ce423aa94fb8c5e9e"} Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.612569 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43ae1623-7ee3-4b18-bb82-aa177354f327","Type":"ContainerDied","Data":"84d36590d14b2a02ab93c4e30f07d48b9f1f3face7cf2c6dd3e36e2fdb3159d2"} Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.612614 4981 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="84d36590d14b2a02ab93c4e30f07d48b9f1f3face7cf2c6dd3e36e2fdb3159d2" Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.614801 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-bbcc-account-create-update-rw9k7" event={"ID":"2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32","Type":"ContainerStarted","Data":"39ff05dc031af9e09325c4e079bd5c68741f0d7647784d7d73b61621a09de604"} Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.624919 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-badb-account-create-update-4hf5f" event={"ID":"eccdd187-3938-4331-82f9-b5dac2e9c1c1","Type":"ContainerStarted","Data":"3fa08cae0d19eff44fe049538c47338dc7fcb2d0cf205553c7cc9f98b137f3d0"} Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.634979 4981 generic.go:334] "Generic (PLEG): container finished" podID="3832dc0a-7cc6-4207-a928-7888e8efee3f" containerID="cafb740cb4b2feea0c56af385f6a4dd4d0f8dbdd880b5a3bb59ad79c5321cb28" exitCode=0 Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.635015 4981 generic.go:334] "Generic (PLEG): container finished" podID="3832dc0a-7cc6-4207-a928-7888e8efee3f" containerID="568583953d6093f1d60b2ba27dc4cb9c593eb884e21cfcf4f13ea7f777856965" exitCode=143 Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.642226 4981 generic.go:334] "Generic (PLEG): container finished" podID="cbf0ef64-e29f-4945-a2df-e1adb0477806" containerID="ce6d2237386c71fe58768d5fc41ce360d9699bde69ba7ed831cda2f824103e2f" exitCode=0 Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.672187 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.680787 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2209696a-9590-49d4-b70f-3c86d1cc62f2" path="/var/lib/kubelet/pods/2209696a-9590-49d4-b70f-3c86d1cc62f2/volumes" Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.690709 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3832dc0a-7cc6-4207-a928-7888e8efee3f","Type":"ContainerDied","Data":"cafb740cb4b2feea0c56af385f6a4dd4d0f8dbdd880b5a3bb59ad79c5321cb28"} Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.690753 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3832dc0a-7cc6-4207-a928-7888e8efee3f","Type":"ContainerDied","Data":"568583953d6093f1d60b2ba27dc4cb9c593eb884e21cfcf4f13ea7f777856965"} Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.690767 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3832dc0a-7cc6-4207-a928-7888e8efee3f","Type":"ContainerDied","Data":"eee2c9b90d3c2a2f3fadd34e99ae2d7f656e452062b8c0ad6ad1b9f71e31b190"} Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.690778 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eee2c9b90d3c2a2f3fadd34e99ae2d7f656e452062b8c0ad6ad1b9f71e31b190" Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.690788 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cbf0ef64-e29f-4945-a2df-e1adb0477806","Type":"ContainerDied","Data":"ce6d2237386c71fe58768d5fc41ce360d9699bde69ba7ed831cda2f824103e2f"} Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.690803 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hpvzf" 
event={"ID":"99c593ba-9134-4372-8392-6903d47aba28","Type":"ContainerStarted","Data":"d83f9aa34dc1820bd2259be94f8526a6add249af3cfc7c8c0b128a80caa2cf27"} Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.690814 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-nw57q" event={"ID":"ed0c2fbd-f556-4dba-a374-4f212f96210a","Type":"ContainerStarted","Data":"ac77ba0d4f1fbd810f57dffbc4656f2f271323b9ae46ef49af6af868539f9fb8"} Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.690825 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-nw57q" event={"ID":"ed0c2fbd-f556-4dba-a374-4f212f96210a","Type":"ContainerStarted","Data":"3ca06c4efc04d1e7ea906a1c6734c7e41a265691c91cddb71d9977eea570a0b3"} Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.690854 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285","Type":"ContainerStarted","Data":"54ee163c5f54dfe8276af142dd6ae6adc80234a4c4ef6b8e77e8ee533dce1de1"} Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.713347 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.827374 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"3832dc0a-7cc6-4207-a928-7888e8efee3f\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.827449 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67d6g\" (UniqueName: \"kubernetes.io/projected/43ae1623-7ee3-4b18-bb82-aa177354f327-kube-api-access-67d6g\") pod \"43ae1623-7ee3-4b18-bb82-aa177354f327\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.827484 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43ae1623-7ee3-4b18-bb82-aa177354f327-scripts\") pod \"43ae1623-7ee3-4b18-bb82-aa177354f327\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.827511 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43ae1623-7ee3-4b18-bb82-aa177354f327-config-data\") pod \"43ae1623-7ee3-4b18-bb82-aa177354f327\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.827599 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"43ae1623-7ee3-4b18-bb82-aa177354f327\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.827742 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3832dc0a-7cc6-4207-a928-7888e8efee3f-config-data\") pod \"3832dc0a-7cc6-4207-a928-7888e8efee3f\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.827826 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43ae1623-7ee3-4b18-bb82-aa177354f327-httpd-run\") pod \"43ae1623-7ee3-4b18-bb82-aa177354f327\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.827888 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3832dc0a-7cc6-4207-a928-7888e8efee3f-logs\") pod \"3832dc0a-7cc6-4207-a928-7888e8efee3f\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.827979 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ae1623-7ee3-4b18-bb82-aa177354f327-combined-ca-bundle\") pod \"43ae1623-7ee3-4b18-bb82-aa177354f327\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.828016 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3832dc0a-7cc6-4207-a928-7888e8efee3f-scripts\") pod \"3832dc0a-7cc6-4207-a928-7888e8efee3f\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.828042 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3832dc0a-7cc6-4207-a928-7888e8efee3f-httpd-run\") pod \"3832dc0a-7cc6-4207-a928-7888e8efee3f\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.828080 4981 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43ae1623-7ee3-4b18-bb82-aa177354f327-logs\") pod \"43ae1623-7ee3-4b18-bb82-aa177354f327\" (UID: \"43ae1623-7ee3-4b18-bb82-aa177354f327\") " Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.828105 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3832dc0a-7cc6-4207-a928-7888e8efee3f-combined-ca-bundle\") pod \"3832dc0a-7cc6-4207-a928-7888e8efee3f\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.828167 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pm4g\" (UniqueName: \"kubernetes.io/projected/3832dc0a-7cc6-4207-a928-7888e8efee3f-kube-api-access-4pm4g\") pod \"3832dc0a-7cc6-4207-a928-7888e8efee3f\" (UID: \"3832dc0a-7cc6-4207-a928-7888e8efee3f\") " Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.846011 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43ae1623-7ee3-4b18-bb82-aa177354f327-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "43ae1623-7ee3-4b18-bb82-aa177354f327" (UID: "43ae1623-7ee3-4b18-bb82-aa177354f327"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.851290 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3832dc0a-7cc6-4207-a928-7888e8efee3f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3832dc0a-7cc6-4207-a928-7888e8efee3f" (UID: "3832dc0a-7cc6-4207-a928-7888e8efee3f"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.851619 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "3832dc0a-7cc6-4207-a928-7888e8efee3f" (UID: "3832dc0a-7cc6-4207-a928-7888e8efee3f"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.853710 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43ae1623-7ee3-4b18-bb82-aa177354f327-logs" (OuterVolumeSpecName: "logs") pod "43ae1623-7ee3-4b18-bb82-aa177354f327" (UID: "43ae1623-7ee3-4b18-bb82-aa177354f327"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.855951 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ae1623-7ee3-4b18-bb82-aa177354f327-scripts" (OuterVolumeSpecName: "scripts") pod "43ae1623-7ee3-4b18-bb82-aa177354f327" (UID: "43ae1623-7ee3-4b18-bb82-aa177354f327"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.856381 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3832dc0a-7cc6-4207-a928-7888e8efee3f-logs" (OuterVolumeSpecName: "logs") pod "3832dc0a-7cc6-4207-a928-7888e8efee3f" (UID: "3832dc0a-7cc6-4207-a928-7888e8efee3f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.867709 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3832dc0a-7cc6-4207-a928-7888e8efee3f-kube-api-access-4pm4g" (OuterVolumeSpecName: "kube-api-access-4pm4g") pod "3832dc0a-7cc6-4207-a928-7888e8efee3f" (UID: "3832dc0a-7cc6-4207-a928-7888e8efee3f"). InnerVolumeSpecName "kube-api-access-4pm4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.883080 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "43ae1623-7ee3-4b18-bb82-aa177354f327" (UID: "43ae1623-7ee3-4b18-bb82-aa177354f327"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.883113 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43ae1623-7ee3-4b18-bb82-aa177354f327-kube-api-access-67d6g" (OuterVolumeSpecName: "kube-api-access-67d6g") pod "43ae1623-7ee3-4b18-bb82-aa177354f327" (UID: "43ae1623-7ee3-4b18-bb82-aa177354f327"). InnerVolumeSpecName "kube-api-access-67d6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.886826 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3832dc0a-7cc6-4207-a928-7888e8efee3f-scripts" (OuterVolumeSpecName: "scripts") pod "3832dc0a-7cc6-4207-a928-7888e8efee3f" (UID: "3832dc0a-7cc6-4207-a928-7888e8efee3f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.930760 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-nw57q" podStartSLOduration=2.930735725 podStartE2EDuration="2.930735725s" podCreationTimestamp="2026-02-27 19:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:17:11.886091503 +0000 UTC m=+1931.364872663" watchObservedRunningTime="2026-02-27 19:17:11.930735725 +0000 UTC m=+1931.409516885" Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.931589 4981 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43ae1623-7ee3-4b18-bb82-aa177354f327-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.931608 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3832dc0a-7cc6-4207-a928-7888e8efee3f-logs\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.931617 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3832dc0a-7cc6-4207-a928-7888e8efee3f-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.931625 4981 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3832dc0a-7cc6-4207-a928-7888e8efee3f-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.931634 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43ae1623-7ee3-4b18-bb82-aa177354f327-logs\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.931643 4981 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-4pm4g\" (UniqueName: \"kubernetes.io/projected/3832dc0a-7cc6-4207-a928-7888e8efee3f-kube-api-access-4pm4g\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.931675 4981 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.931687 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67d6g\" (UniqueName: \"kubernetes.io/projected/43ae1623-7ee3-4b18-bb82-aa177354f327-kube-api-access-67d6g\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.931697 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43ae1623-7ee3-4b18-bb82-aa177354f327-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.931715 4981 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 27 19:17:11 crc kubenswrapper[4981]: I0227 19:17:11.992862 4981 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.002544 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3832dc0a-7cc6-4207-a928-7888e8efee3f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3832dc0a-7cc6-4207-a928-7888e8efee3f" (UID: "3832dc0a-7cc6-4207-a928-7888e8efee3f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.003394 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.022445 4981 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.027902 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ae1623-7ee3-4b18-bb82-aa177354f327-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43ae1623-7ee3-4b18-bb82-aa177354f327" (UID: "43ae1623-7ee3-4b18-bb82-aa177354f327"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.036711 4981 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.036747 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ae1623-7ee3-4b18-bb82-aa177354f327-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.036776 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3832dc0a-7cc6-4207-a928-7888e8efee3f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.036785 4981 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 
19:17:12.052158 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ae1623-7ee3-4b18-bb82-aa177354f327-config-data" (OuterVolumeSpecName: "config-data") pod "43ae1623-7ee3-4b18-bb82-aa177354f327" (UID: "43ae1623-7ee3-4b18-bb82-aa177354f327"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.061491 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3832dc0a-7cc6-4207-a928-7888e8efee3f-config-data" (OuterVolumeSpecName: "config-data") pod "3832dc0a-7cc6-4207-a928-7888e8efee3f" (UID: "3832dc0a-7cc6-4207-a928-7888e8efee3f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.138155 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbf0ef64-e29f-4945-a2df-e1adb0477806-scripts\") pod \"cbf0ef64-e29f-4945-a2df-e1adb0477806\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.138520 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbf0ef64-e29f-4945-a2df-e1adb0477806-sg-core-conf-yaml\") pod \"cbf0ef64-e29f-4945-a2df-e1adb0477806\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.138605 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf0ef64-e29f-4945-a2df-e1adb0477806-config-data\") pod \"cbf0ef64-e29f-4945-a2df-e1adb0477806\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.138654 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbf0ef64-e29f-4945-a2df-e1adb0477806-combined-ca-bundle\") pod \"cbf0ef64-e29f-4945-a2df-e1adb0477806\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.138702 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7j5g\" (UniqueName: \"kubernetes.io/projected/cbf0ef64-e29f-4945-a2df-e1adb0477806-kube-api-access-p7j5g\") pod \"cbf0ef64-e29f-4945-a2df-e1adb0477806\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.138805 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf0ef64-e29f-4945-a2df-e1adb0477806-run-httpd\") pod \"cbf0ef64-e29f-4945-a2df-e1adb0477806\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.138898 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf0ef64-e29f-4945-a2df-e1adb0477806-log-httpd\") pod \"cbf0ef64-e29f-4945-a2df-e1adb0477806\" (UID: \"cbf0ef64-e29f-4945-a2df-e1adb0477806\") " Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.140129 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43ae1623-7ee3-4b18-bb82-aa177354f327-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.140157 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3832dc0a-7cc6-4207-a928-7888e8efee3f-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.140517 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbf0ef64-e29f-4945-a2df-e1adb0477806-log-httpd" 
(OuterVolumeSpecName: "log-httpd") pod "cbf0ef64-e29f-4945-a2df-e1adb0477806" (UID: "cbf0ef64-e29f-4945-a2df-e1adb0477806"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.143019 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbf0ef64-e29f-4945-a2df-e1adb0477806-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cbf0ef64-e29f-4945-a2df-e1adb0477806" (UID: "cbf0ef64-e29f-4945-a2df-e1adb0477806"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.144972 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf0ef64-e29f-4945-a2df-e1adb0477806-scripts" (OuterVolumeSpecName: "scripts") pod "cbf0ef64-e29f-4945-a2df-e1adb0477806" (UID: "cbf0ef64-e29f-4945-a2df-e1adb0477806"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.146133 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbf0ef64-e29f-4945-a2df-e1adb0477806-kube-api-access-p7j5g" (OuterVolumeSpecName: "kube-api-access-p7j5g") pod "cbf0ef64-e29f-4945-a2df-e1adb0477806" (UID: "cbf0ef64-e29f-4945-a2df-e1adb0477806"). InnerVolumeSpecName "kube-api-access-p7j5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.198875 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf0ef64-e29f-4945-a2df-e1adb0477806-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cbf0ef64-e29f-4945-a2df-e1adb0477806" (UID: "cbf0ef64-e29f-4945-a2df-e1adb0477806"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.218615 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf0ef64-e29f-4945-a2df-e1adb0477806-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbf0ef64-e29f-4945-a2df-e1adb0477806" (UID: "cbf0ef64-e29f-4945-a2df-e1adb0477806"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.244004 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7j5g\" (UniqueName: \"kubernetes.io/projected/cbf0ef64-e29f-4945-a2df-e1adb0477806-kube-api-access-p7j5g\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.244050 4981 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf0ef64-e29f-4945-a2df-e1adb0477806-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.244080 4981 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cbf0ef64-e29f-4945-a2df-e1adb0477806-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.244091 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbf0ef64-e29f-4945-a2df-e1adb0477806-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.244103 4981 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cbf0ef64-e29f-4945-a2df-e1adb0477806-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.244118 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cbf0ef64-e29f-4945-a2df-e1adb0477806-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.281294 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf0ef64-e29f-4945-a2df-e1adb0477806-config-data" (OuterVolumeSpecName: "config-data") pod "cbf0ef64-e29f-4945-a2df-e1adb0477806" (UID: "cbf0ef64-e29f-4945-a2df-e1adb0477806"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.345820 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbf0ef64-e29f-4945-a2df-e1adb0477806-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.696120 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-badb-account-create-update-4hf5f" event={"ID":"eccdd187-3938-4331-82f9-b5dac2e9c1c1","Type":"ContainerStarted","Data":"0107fd337927931131bec521f05b370a528d6b80221a1a8d41f45e98551f9de5"} Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.701449 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a620-account-create-update-4fnbx" event={"ID":"eac1eaf7-6fea-4dae-b8f3-b81615d30ee0","Type":"ContainerStarted","Data":"85db6b13408ac6d2fc34958e4e235baace7ea42c2b7f4693b9c8c037070214e8"} Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.704992 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285","Type":"ContainerStarted","Data":"5504fe172b1dab98409d133dc9ed246af0545979ab9f8635024b3da89221f235"} Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.711221 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"cbf0ef64-e29f-4945-a2df-e1adb0477806","Type":"ContainerDied","Data":"1b9eef2ff64381867bbd0fad8ea69edd4f3a08be73f687bf990b1275e5f979ff"} Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.711283 4981 scope.go:117] "RemoveContainer" containerID="d171d20b78507cfcb0d7e8ca83e10e8baebd90e967f0c79f21d759ca92c135d7" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.711434 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.723882 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-bbcc-account-create-update-rw9k7" event={"ID":"2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32","Type":"ContainerStarted","Data":"d457a0124f7f00549b4901eaf6afb0ec7c6f599305643ac259338e9d66f9806a"} Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.727270 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.727301 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.727337 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hpvzf" event={"ID":"99c593ba-9134-4372-8392-6903d47aba28","Type":"ContainerStarted","Data":"25fda4f10dfd0f44cf1cd3087585b51cb2ce5d153f1ed85faefc7e85006aee22"} Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.731482 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-badb-account-create-update-4hf5f" podStartSLOduration=3.731460783 podStartE2EDuration="3.731460783s" podCreationTimestamp="2026-02-27 19:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:17:12.713043353 +0000 UTC m=+1932.191824503" watchObservedRunningTime="2026-02-27 19:17:12.731460783 +0000 UTC m=+1932.210241943" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.739456 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-a620-account-create-update-4fnbx" podStartSLOduration=2.739440351 podStartE2EDuration="2.739440351s" podCreationTimestamp="2026-02-27 19:17:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:17:12.733736311 +0000 UTC m=+1932.212517491" watchObservedRunningTime="2026-02-27 19:17:12.739440351 +0000 UTC m=+1932.218221511" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.757109 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-hpvzf" podStartSLOduration=2.757089098 podStartE2EDuration="2.757089098s" podCreationTimestamp="2026-02-27 19:17:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 
19:17:12.750304655 +0000 UTC m=+1932.229085825" watchObservedRunningTime="2026-02-27 19:17:12.757089098 +0000 UTC m=+1932.235870248" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.763485 4981 scope.go:117] "RemoveContainer" containerID="1ad1a44c358ac86d4bb2f9487ee4a9d977416506a3b7f46ee16b72419080af48" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.808141 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-bbcc-account-create-update-rw9k7" podStartSLOduration=2.80811788 podStartE2EDuration="2.80811788s" podCreationTimestamp="2026-02-27 19:17:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:17:12.803633026 +0000 UTC m=+1932.282414186" watchObservedRunningTime="2026-02-27 19:17:12.80811788 +0000 UTC m=+1932.286899040" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.812467 4981 scope.go:117] "RemoveContainer" containerID="6f2ec3ef33889a3eebc60b5f33250f1ece331b74527f03423a47750db361ffc1" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.811926 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-5kflz" podStartSLOduration=3.811907263 podStartE2EDuration="3.811907263s" podCreationTimestamp="2026-02-27 19:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:17:12.786626199 +0000 UTC m=+1932.265407359" watchObservedRunningTime="2026-02-27 19:17:12.811907263 +0000 UTC m=+1932.290688443" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.845711 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.867610 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.896396 
4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.902433 4981 scope.go:117] "RemoveContainer" containerID="ce6d2237386c71fe58768d5fc41ce360d9699bde69ba7ed831cda2f824103e2f" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.913337 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.926318 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:17:12 crc kubenswrapper[4981]: E0227 19:17:12.926845 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf0ef64-e29f-4945-a2df-e1adb0477806" containerName="ceilometer-central-agent" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.926885 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf0ef64-e29f-4945-a2df-e1adb0477806" containerName="ceilometer-central-agent" Feb 27 19:17:12 crc kubenswrapper[4981]: E0227 19:17:12.926901 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43ae1623-7ee3-4b18-bb82-aa177354f327" containerName="glance-log" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.926908 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="43ae1623-7ee3-4b18-bb82-aa177354f327" containerName="glance-log" Feb 27 19:17:12 crc kubenswrapper[4981]: E0227 19:17:12.926918 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3832dc0a-7cc6-4207-a928-7888e8efee3f" containerName="glance-httpd" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.926926 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="3832dc0a-7cc6-4207-a928-7888e8efee3f" containerName="glance-httpd" Feb 27 19:17:12 crc kubenswrapper[4981]: E0227 19:17:12.926936 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf0ef64-e29f-4945-a2df-e1adb0477806" containerName="sg-core" Feb 27 19:17:12 
crc kubenswrapper[4981]: I0227 19:17:12.926944 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf0ef64-e29f-4945-a2df-e1adb0477806" containerName="sg-core" Feb 27 19:17:12 crc kubenswrapper[4981]: E0227 19:17:12.926968 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43ae1623-7ee3-4b18-bb82-aa177354f327" containerName="glance-httpd" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.926975 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="43ae1623-7ee3-4b18-bb82-aa177354f327" containerName="glance-httpd" Feb 27 19:17:12 crc kubenswrapper[4981]: E0227 19:17:12.926996 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf0ef64-e29f-4945-a2df-e1adb0477806" containerName="ceilometer-notification-agent" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.927003 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf0ef64-e29f-4945-a2df-e1adb0477806" containerName="ceilometer-notification-agent" Feb 27 19:17:12 crc kubenswrapper[4981]: E0227 19:17:12.927012 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3832dc0a-7cc6-4207-a928-7888e8efee3f" containerName="glance-log" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.927021 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="3832dc0a-7cc6-4207-a928-7888e8efee3f" containerName="glance-log" Feb 27 19:17:12 crc kubenswrapper[4981]: E0227 19:17:12.927035 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf0ef64-e29f-4945-a2df-e1adb0477806" containerName="proxy-httpd" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.927043 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf0ef64-e29f-4945-a2df-e1adb0477806" containerName="proxy-httpd" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.927282 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf0ef64-e29f-4945-a2df-e1adb0477806" containerName="sg-core" Feb 27 19:17:12 crc kubenswrapper[4981]: 
I0227 19:17:12.927309 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf0ef64-e29f-4945-a2df-e1adb0477806" containerName="ceilometer-central-agent" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.927318 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="43ae1623-7ee3-4b18-bb82-aa177354f327" containerName="glance-log" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.927330 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="43ae1623-7ee3-4b18-bb82-aa177354f327" containerName="glance-httpd" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.927342 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="3832dc0a-7cc6-4207-a928-7888e8efee3f" containerName="glance-log" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.927359 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="3832dc0a-7cc6-4207-a928-7888e8efee3f" containerName="glance-httpd" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.927373 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf0ef64-e29f-4945-a2df-e1adb0477806" containerName="proxy-httpd" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.927384 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf0ef64-e29f-4945-a2df-e1adb0477806" containerName="ceilometer-notification-agent" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.938728 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.938877 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.943635 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.943914 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.963770 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 19:17:12 crc kubenswrapper[4981]: I0227 19:17:12.991792 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.025367 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.027024 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.033366 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fm8mn" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.033791 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.033950 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.034118 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.050292 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.059701 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-config-data\") pod \"ceilometer-0\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " pod="openstack/ceilometer-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.059742 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d6d3674-eb9a-4631-998a-73b14544d6d8-run-httpd\") pod \"ceilometer-0\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " pod="openstack/ceilometer-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.059764 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-scripts\") pod \"ceilometer-0\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " pod="openstack/ceilometer-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.059788 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d6d3674-eb9a-4631-998a-73b14544d6d8-log-httpd\") pod \"ceilometer-0\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " pod="openstack/ceilometer-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.059898 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbkv4\" (UniqueName: \"kubernetes.io/projected/5d6d3674-eb9a-4631-998a-73b14544d6d8-kube-api-access-zbkv4\") pod \"ceilometer-0\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " pod="openstack/ceilometer-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.059921 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " pod="openstack/ceilometer-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.059947 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " pod="openstack/ceilometer-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.062864 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.080376 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.082733 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.101414 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.116960 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.161248 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0491c487-f89e-4ce4-ad63-323ad7624487-config-data\") pod \"glance-default-external-api-0\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.161314 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.161368 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbkv4\" (UniqueName: \"kubernetes.io/projected/5d6d3674-eb9a-4631-998a-73b14544d6d8-kube-api-access-zbkv4\") pod \"ceilometer-0\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " pod="openstack/ceilometer-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.161395 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.161425 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " pod="openstack/ceilometer-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.161448 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.161467 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0491c487-f89e-4ce4-ad63-323ad7624487-logs\") pod \"glance-default-external-api-0\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.161496 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnhwz\" (UniqueName: \"kubernetes.io/projected/0491c487-f89e-4ce4-ad63-323ad7624487-kube-api-access-fnhwz\") pod \"glance-default-external-api-0\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.161524 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0491c487-f89e-4ce4-ad63-323ad7624487-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.161550 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " pod="openstack/ceilometer-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.161578 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0491c487-f89e-4ce4-ad63-323ad7624487-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.161615 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5d6d3674-eb9a-4631-998a-73b14544d6d8-run-httpd\") pod \"ceilometer-0\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " pod="openstack/ceilometer-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.161637 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.161660 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-config-data\") pod \"ceilometer-0\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " pod="openstack/ceilometer-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.161688 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-scripts\") pod \"ceilometer-0\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " pod="openstack/ceilometer-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.161718 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d6d3674-eb9a-4631-998a-73b14544d6d8-log-httpd\") pod \"ceilometer-0\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " pod="openstack/ceilometer-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.161740 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " pod="openstack/glance-default-internal-api-0" 
Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.161768 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.161805 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.161837 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-logs\") pod \"glance-default-internal-api-0\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.161879 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2pzm\" (UniqueName: \"kubernetes.io/projected/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-kube-api-access-z2pzm\") pod \"glance-default-internal-api-0\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.161917 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0491c487-f89e-4ce4-ad63-323ad7624487-scripts\") pod \"glance-default-external-api-0\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " 
pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.161946 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0491c487-f89e-4ce4-ad63-323ad7624487-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.164760 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d6d3674-eb9a-4631-998a-73b14544d6d8-log-httpd\") pod \"ceilometer-0\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " pod="openstack/ceilometer-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.165251 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d6d3674-eb9a-4631-998a-73b14544d6d8-run-httpd\") pod \"ceilometer-0\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " pod="openstack/ceilometer-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.184629 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " pod="openstack/ceilometer-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.185324 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbkv4\" (UniqueName: \"kubernetes.io/projected/5d6d3674-eb9a-4631-998a-73b14544d6d8-kube-api-access-zbkv4\") pod \"ceilometer-0\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " pod="openstack/ceilometer-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.186699 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-scripts\") pod \"ceilometer-0\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " pod="openstack/ceilometer-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.187136 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " pod="openstack/ceilometer-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.189194 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-config-data\") pod \"ceilometer-0\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " pod="openstack/ceilometer-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.263173 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0491c487-f89e-4ce4-ad63-323ad7624487-config-data\") pod \"glance-default-external-api-0\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.263235 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.263285 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " 
pod="openstack/glance-default-internal-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.263307 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.263323 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0491c487-f89e-4ce4-ad63-323ad7624487-logs\") pod \"glance-default-external-api-0\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.263348 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnhwz\" (UniqueName: \"kubernetes.io/projected/0491c487-f89e-4ce4-ad63-323ad7624487-kube-api-access-fnhwz\") pod \"glance-default-external-api-0\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.263365 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0491c487-f89e-4ce4-ad63-323ad7624487-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.263384 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0491c487-f89e-4ce4-ad63-323ad7624487-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc 
kubenswrapper[4981]: I0227 19:17:13.263411 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.263441 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.263461 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.263487 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.263512 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-logs\") pod \"glance-default-internal-api-0\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.263542 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-z2pzm\" (UniqueName: \"kubernetes.io/projected/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-kube-api-access-z2pzm\") pod \"glance-default-internal-api-0\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.263566 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0491c487-f89e-4ce4-ad63-323ad7624487-scripts\") pod \"glance-default-external-api-0\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.263590 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0491c487-f89e-4ce4-ad63-323ad7624487-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.263788 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.264552 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-logs\") pod \"glance-default-internal-api-0\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.264842 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.266363 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.267464 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0491c487-f89e-4ce4-ad63-323ad7624487-logs\") pod \"glance-default-external-api-0\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.267584 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.267643 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0491c487-f89e-4ce4-ad63-323ad7624487-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.268113 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.268301 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0491c487-f89e-4ce4-ad63-323ad7624487-scripts\") pod \"glance-default-external-api-0\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.272791 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.273410 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0491c487-f89e-4ce4-ad63-323ad7624487-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.273662 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.274133 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0491c487-f89e-4ce4-ad63-323ad7624487-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " 
pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.274513 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0491c487-f89e-4ce4-ad63-323ad7624487-config-data\") pod \"glance-default-external-api-0\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.283970 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnhwz\" (UniqueName: \"kubernetes.io/projected/0491c487-f89e-4ce4-ad63-323ad7624487-kube-api-access-fnhwz\") pod \"glance-default-external-api-0\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.286333 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2pzm\" (UniqueName: \"kubernetes.io/projected/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-kube-api-access-z2pzm\") pod \"glance-default-internal-api-0\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.304366 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.314361 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.329575 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " pod="openstack/glance-default-internal-api-0"
Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.398315 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.420177 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.643788 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3832dc0a-7cc6-4207-a928-7888e8efee3f" path="/var/lib/kubelet/pods/3832dc0a-7cc6-4207-a928-7888e8efee3f/volumes"
Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.647476 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43ae1623-7ee3-4b18-bb82-aa177354f327" path="/var/lib/kubelet/pods/43ae1623-7ee3-4b18-bb82-aa177354f327/volumes"
Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.648572 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbf0ef64-e29f-4945-a2df-e1adb0477806" path="/var/lib/kubelet/pods/cbf0ef64-e29f-4945-a2df-e1adb0477806/volumes"
Feb 27 19:17:13 crc kubenswrapper[4981]: W0227 19:17:13.884283 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d6d3674_eb9a_4631_998a_73b14544d6d8.slice/crio-f1397479d9da71e8b305d7b9a79f92bcce2dd04a7850c6e2be9abe0c5072592c WatchSource:0}: Error finding container f1397479d9da71e8b305d7b9a79f92bcce2dd04a7850c6e2be9abe0c5072592c: Status 404 returned error can't find the container with id f1397479d9da71e8b305d7b9a79f92bcce2dd04a7850c6e2be9abe0c5072592c
Feb 27 19:17:13 crc kubenswrapper[4981]: I0227 19:17:13.884824 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 19:17:14 crc kubenswrapper[4981]: I0227 19:17:14.072291 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 27 19:17:14 crc kubenswrapper[4981]: W0227 19:17:14.103443 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0491c487_f89e_4ce4_ad63_323ad7624487.slice/crio-d8deb19685d07865d0e26af1912b05d45c45e7b836f6e171faecb6e3bd0d23f3 WatchSource:0}: Error finding container d8deb19685d07865d0e26af1912b05d45c45e7b836f6e171faecb6e3bd0d23f3: Status 404 returned error can't find the container with id d8deb19685d07865d0e26af1912b05d45c45e7b836f6e171faecb6e3bd0d23f3
Feb 27 19:17:14 crc kubenswrapper[4981]: I0227 19:17:14.668602 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 27 19:17:14 crc kubenswrapper[4981]: I0227 19:17:14.758905 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0491c487-f89e-4ce4-ad63-323ad7624487","Type":"ContainerStarted","Data":"d8deb19685d07865d0e26af1912b05d45c45e7b836f6e171faecb6e3bd0d23f3"}
Feb 27 19:17:14 crc kubenswrapper[4981]: I0227 19:17:14.774223 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285","Type":"ContainerStarted","Data":"ca40e5e300e579035592e5f61ba56d1c13272badf8162c9e4acf2f73e36e387f"}
Feb 27 19:17:14 crc kubenswrapper[4981]: I0227 19:17:14.781576 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4177f13-d8db-4bc0-b8d0-ab10ad41141a","Type":"ContainerStarted","Data":"2036f916a3f9d0c2e8aa09963d8dba9504a8bcd17c6e731fdac399639c3dda55"}
Feb 27 19:17:14 crc kubenswrapper[4981]: I0227 19:17:14.791103 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d6d3674-eb9a-4631-998a-73b14544d6d8","Type":"ContainerStarted","Data":"f1397479d9da71e8b305d7b9a79f92bcce2dd04a7850c6e2be9abe0c5072592c"}
Feb 27 19:17:14 crc kubenswrapper[4981]: I0227 19:17:14.792724 4981 generic.go:334] "Generic (PLEG): container finished" podID="99c593ba-9134-4372-8392-6903d47aba28" containerID="25fda4f10dfd0f44cf1cd3087585b51cb2ce5d153f1ed85faefc7e85006aee22" exitCode=0
Feb 27 19:17:14 crc kubenswrapper[4981]: I0227 19:17:14.792755 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hpvzf" event={"ID":"99c593ba-9134-4372-8392-6903d47aba28","Type":"ContainerDied","Data":"25fda4f10dfd0f44cf1cd3087585b51cb2ce5d153f1ed85faefc7e85006aee22"}
Feb 27 19:17:14 crc kubenswrapper[4981]: I0227 19:17:14.810095 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.810040614 podStartE2EDuration="5.810040614s" podCreationTimestamp="2026-02-27 19:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:17:14.795953204 +0000 UTC m=+1934.274734364" watchObservedRunningTime="2026-02-27 19:17:14.810040614 +0000 UTC m=+1934.288821774"
Feb 27 19:17:15 crc kubenswrapper[4981]: I0227 19:17:15.004135 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 27 19:17:15 crc kubenswrapper[4981]: I0227 19:17:15.334261 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq"
Feb 27 19:17:15 crc kubenswrapper[4981]: I0227 19:17:15.448963 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bf4c8dd6c-shmsm"]
Feb 27 19:17:15 crc kubenswrapper[4981]: I0227 19:17:15.454905 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" podUID="2d4e8b93-9791-4889-8e5f-5a3938235441" containerName="dnsmasq-dns" containerID="cri-o://10e9733bd1a84beb0e9d3bdac8211f223dcef3ed3d8833b09543cc31cc3e56eb" gracePeriod=10
Feb 27 19:17:15 crc kubenswrapper[4981]: I0227 19:17:15.803633 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0491c487-f89e-4ce4-ad63-323ad7624487","Type":"ContainerStarted","Data":"5e181015ebaa0f3e4a87d5a2fd2fa0cec47474fd6f1f3e6e3ce25f06cffdcb82"}
Feb 27 19:17:15 crc kubenswrapper[4981]: I0227 19:17:15.805126 4981 generic.go:334] "Generic (PLEG): container finished" podID="ed0c2fbd-f556-4dba-a374-4f212f96210a" containerID="ac77ba0d4f1fbd810f57dffbc4656f2f271323b9ae46ef49af6af868539f9fb8" exitCode=0
Feb 27 19:17:15 crc kubenswrapper[4981]: I0227 19:17:15.805162 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-nw57q" event={"ID":"ed0c2fbd-f556-4dba-a374-4f212f96210a","Type":"ContainerDied","Data":"ac77ba0d4f1fbd810f57dffbc4656f2f271323b9ae46ef49af6af868539f9fb8"}
Feb 27 19:17:16 crc kubenswrapper[4981]: I0227 19:17:16.208673 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hpvzf"
Feb 27 19:17:16 crc kubenswrapper[4981]: I0227 19:17:16.243005 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99c593ba-9134-4372-8392-6903d47aba28-operator-scripts\") pod \"99c593ba-9134-4372-8392-6903d47aba28\" (UID: \"99c593ba-9134-4372-8392-6903d47aba28\") "
Feb 27 19:17:16 crc kubenswrapper[4981]: I0227 19:17:16.244129 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99c593ba-9134-4372-8392-6903d47aba28-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "99c593ba-9134-4372-8392-6903d47aba28" (UID: "99c593ba-9134-4372-8392-6903d47aba28"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:17:16 crc kubenswrapper[4981]: I0227 19:17:16.244367 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpd6q\" (UniqueName: \"kubernetes.io/projected/99c593ba-9134-4372-8392-6903d47aba28-kube-api-access-jpd6q\") pod \"99c593ba-9134-4372-8392-6903d47aba28\" (UID: \"99c593ba-9134-4372-8392-6903d47aba28\") "
Feb 27 19:17:16 crc kubenswrapper[4981]: I0227 19:17:16.246239 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99c593ba-9134-4372-8392-6903d47aba28-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 19:17:16 crc kubenswrapper[4981]: I0227 19:17:16.253290 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99c593ba-9134-4372-8392-6903d47aba28-kube-api-access-jpd6q" (OuterVolumeSpecName: "kube-api-access-jpd6q") pod "99c593ba-9134-4372-8392-6903d47aba28" (UID: "99c593ba-9134-4372-8392-6903d47aba28"). InnerVolumeSpecName "kube-api-access-jpd6q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:17:16 crc kubenswrapper[4981]: I0227 19:17:16.309759 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kp7th"
Feb 27 19:17:16 crc kubenswrapper[4981]: I0227 19:17:16.347595 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpd6q\" (UniqueName: \"kubernetes.io/projected/99c593ba-9134-4372-8392-6903d47aba28-kube-api-access-jpd6q\") on node \"crc\" DevicePath \"\""
Feb 27 19:17:16 crc kubenswrapper[4981]: I0227 19:17:16.366374 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kp7th"
Feb 27 19:17:16 crc kubenswrapper[4981]: I0227 19:17:16.817237 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hpvzf"
Feb 27 19:17:16 crc kubenswrapper[4981]: I0227 19:17:16.817300 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hpvzf" event={"ID":"99c593ba-9134-4372-8392-6903d47aba28","Type":"ContainerDied","Data":"d83f9aa34dc1820bd2259be94f8526a6add249af3cfc7c8c0b128a80caa2cf27"}
Feb 27 19:17:16 crc kubenswrapper[4981]: I0227 19:17:16.817357 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d83f9aa34dc1820bd2259be94f8526a6add249af3cfc7c8c0b128a80caa2cf27"
Feb 27 19:17:16 crc kubenswrapper[4981]: I0227 19:17:16.819544 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4177f13-d8db-4bc0-b8d0-ab10ad41141a","Type":"ContainerStarted","Data":"ea037a2a3049ae2cd6efb3f77f7b56bfc162c209d7183395934e2ad212d932e0"}
Feb 27 19:17:16 crc kubenswrapper[4981]: I0227 19:17:16.830452 4981 generic.go:334] "Generic (PLEG): container finished" podID="2d4e8b93-9791-4889-8e5f-5a3938235441" containerID="10e9733bd1a84beb0e9d3bdac8211f223dcef3ed3d8833b09543cc31cc3e56eb" exitCode=0
Feb 27 19:17:16 crc kubenswrapper[4981]: I0227 19:17:16.830566 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" event={"ID":"2d4e8b93-9791-4889-8e5f-5a3938235441","Type":"ContainerDied","Data":"10e9733bd1a84beb0e9d3bdac8211f223dcef3ed3d8833b09543cc31cc3e56eb"}
Feb 27 19:17:17 crc kubenswrapper[4981]: I0227 19:17:17.081116 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kp7th"]
Feb 27 19:17:17 crc kubenswrapper[4981]: I0227 19:17:17.282206 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-nw57q"
Feb 27 19:17:17 crc kubenswrapper[4981]: I0227 19:17:17.364811 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed0c2fbd-f556-4dba-a374-4f212f96210a-operator-scripts\") pod \"ed0c2fbd-f556-4dba-a374-4f212f96210a\" (UID: \"ed0c2fbd-f556-4dba-a374-4f212f96210a\") "
Feb 27 19:17:17 crc kubenswrapper[4981]: I0227 19:17:17.365011 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g76qc\" (UniqueName: \"kubernetes.io/projected/ed0c2fbd-f556-4dba-a374-4f212f96210a-kube-api-access-g76qc\") pod \"ed0c2fbd-f556-4dba-a374-4f212f96210a\" (UID: \"ed0c2fbd-f556-4dba-a374-4f212f96210a\") "
Feb 27 19:17:17 crc kubenswrapper[4981]: I0227 19:17:17.366951 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed0c2fbd-f556-4dba-a374-4f212f96210a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed0c2fbd-f556-4dba-a374-4f212f96210a" (UID: "ed0c2fbd-f556-4dba-a374-4f212f96210a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:17:17 crc kubenswrapper[4981]: I0227 19:17:17.380951 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed0c2fbd-f556-4dba-a374-4f212f96210a-kube-api-access-g76qc" (OuterVolumeSpecName: "kube-api-access-g76qc") pod "ed0c2fbd-f556-4dba-a374-4f212f96210a" (UID: "ed0c2fbd-f556-4dba-a374-4f212f96210a"). InnerVolumeSpecName "kube-api-access-g76qc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:17:17 crc kubenswrapper[4981]: I0227 19:17:17.467620 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed0c2fbd-f556-4dba-a374-4f212f96210a-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 19:17:17 crc kubenswrapper[4981]: I0227 19:17:17.468180 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g76qc\" (UniqueName: \"kubernetes.io/projected/ed0c2fbd-f556-4dba-a374-4f212f96210a-kube-api-access-g76qc\") on node \"crc\" DevicePath \"\""
Feb 27 19:17:17 crc kubenswrapper[4981]: I0227 19:17:17.879514 4981 generic.go:334] "Generic (PLEG): container finished" podID="eac1eaf7-6fea-4dae-b8f3-b81615d30ee0" containerID="85db6b13408ac6d2fc34958e4e235baace7ea42c2b7f4693b9c8c037070214e8" exitCode=0
Feb 27 19:17:17 crc kubenswrapper[4981]: I0227 19:17:17.879617 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a620-account-create-update-4fnbx" event={"ID":"eac1eaf7-6fea-4dae-b8f3-b81615d30ee0","Type":"ContainerDied","Data":"85db6b13408ac6d2fc34958e4e235baace7ea42c2b7f4693b9c8c037070214e8"}
Feb 27 19:17:17 crc kubenswrapper[4981]: I0227 19:17:17.887829 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-nw57q" event={"ID":"ed0c2fbd-f556-4dba-a374-4f212f96210a","Type":"ContainerDied","Data":"3ca06c4efc04d1e7ea906a1c6734c7e41a265691c91cddb71d9977eea570a0b3"}
Feb 27 19:17:17 crc kubenswrapper[4981]: I0227 19:17:17.888133 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ca06c4efc04d1e7ea906a1c6734c7e41a265691c91cddb71d9977eea570a0b3"
Feb 27 19:17:17 crc kubenswrapper[4981]: I0227 19:17:17.888254 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-nw57q"
Feb 27 19:17:17 crc kubenswrapper[4981]: I0227 19:17:17.890261 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm" event={"ID":"2d4e8b93-9791-4889-8e5f-5a3938235441","Type":"ContainerDied","Data":"e177c1f54320132adf2f25e3537b4d766d0dc4e2c054c2bce5b8b4aea7789a37"}
Feb 27 19:17:17 crc kubenswrapper[4981]: I0227 19:17:17.890307 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e177c1f54320132adf2f25e3537b4d766d0dc4e2c054c2bce5b8b4aea7789a37"
Feb 27 19:17:17 crc kubenswrapper[4981]: I0227 19:17:17.893483 4981 generic.go:334] "Generic (PLEG): container finished" podID="19242b3f-f738-49a0-be1b-578a62ec5f22" containerID="9bf87d74da42f41aead6e2511ae77fd0b199c2f99ec7fddc34b2d5c03a436bfa" exitCode=0
Feb 27 19:17:17 crc kubenswrapper[4981]: I0227 19:17:17.893623 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5kflz" event={"ID":"19242b3f-f738-49a0-be1b-578a62ec5f22","Type":"ContainerDied","Data":"9bf87d74da42f41aead6e2511ae77fd0b199c2f99ec7fddc34b2d5c03a436bfa"}
Feb 27 19:17:17 crc kubenswrapper[4981]: I0227 19:17:17.902391 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0491c487-f89e-4ce4-ad63-323ad7624487","Type":"ContainerStarted","Data":"cac0b92f8247c85d31a507b6ae253c037102dd71220d47042a591b3da9346b43"}
Feb 27 19:17:17 crc kubenswrapper[4981]: I0227 19:17:17.902576 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kp7th" podUID="ecdc9fdc-fd28-4bdc-8403-934067c2ec3d" containerName="registry-server" containerID="cri-o://d889a6b00da43b08c5a203968124cd83a2f3b4226be1024b9bb165ee13e5913d" gracePeriod=2
Feb 27 19:17:17 crc kubenswrapper[4981]: I0227 19:17:17.943282 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.94326156 podStartE2EDuration="5.94326156s" podCreationTimestamp="2026-02-27 19:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:17:17.938389304 +0000 UTC m=+1937.417170474" watchObservedRunningTime="2026-02-27 19:17:17.94326156 +0000 UTC m=+1937.422042720"
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.122001 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm"
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.186828 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-ovsdbserver-nb\") pod \"2d4e8b93-9791-4889-8e5f-5a3938235441\" (UID: \"2d4e8b93-9791-4889-8e5f-5a3938235441\") "
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.187251 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-ovsdbserver-sb\") pod \"2d4e8b93-9791-4889-8e5f-5a3938235441\" (UID: \"2d4e8b93-9791-4889-8e5f-5a3938235441\") "
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.187295 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ldxm\" (UniqueName: \"kubernetes.io/projected/2d4e8b93-9791-4889-8e5f-5a3938235441-kube-api-access-6ldxm\") pod \"2d4e8b93-9791-4889-8e5f-5a3938235441\" (UID: \"2d4e8b93-9791-4889-8e5f-5a3938235441\") "
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.187324 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-dns-swift-storage-0\") pod \"2d4e8b93-9791-4889-8e5f-5a3938235441\" (UID: \"2d4e8b93-9791-4889-8e5f-5a3938235441\") "
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.187346 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-config\") pod \"2d4e8b93-9791-4889-8e5f-5a3938235441\" (UID: \"2d4e8b93-9791-4889-8e5f-5a3938235441\") "
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.187377 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-dns-svc\") pod \"2d4e8b93-9791-4889-8e5f-5a3938235441\" (UID: \"2d4e8b93-9791-4889-8e5f-5a3938235441\") "
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.229434 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d4e8b93-9791-4889-8e5f-5a3938235441-kube-api-access-6ldxm" (OuterVolumeSpecName: "kube-api-access-6ldxm") pod "2d4e8b93-9791-4889-8e5f-5a3938235441" (UID: "2d4e8b93-9791-4889-8e5f-5a3938235441"). InnerVolumeSpecName "kube-api-access-6ldxm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.289426 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ldxm\" (UniqueName: \"kubernetes.io/projected/2d4e8b93-9791-4889-8e5f-5a3938235441-kube-api-access-6ldxm\") on node \"crc\" DevicePath \"\""
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.313983 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-config" (OuterVolumeSpecName: "config") pod "2d4e8b93-9791-4889-8e5f-5a3938235441" (UID: "2d4e8b93-9791-4889-8e5f-5a3938235441"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.315774 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2d4e8b93-9791-4889-8e5f-5a3938235441" (UID: "2d4e8b93-9791-4889-8e5f-5a3938235441"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.320548 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2d4e8b93-9791-4889-8e5f-5a3938235441" (UID: "2d4e8b93-9791-4889-8e5f-5a3938235441"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.321065 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2d4e8b93-9791-4889-8e5f-5a3938235441" (UID: "2d4e8b93-9791-4889-8e5f-5a3938235441"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.327854 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2d4e8b93-9791-4889-8e5f-5a3938235441" (UID: "2d4e8b93-9791-4889-8e5f-5a3938235441"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.330947 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kp7th"
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.391032 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecdc9fdc-fd28-4bdc-8403-934067c2ec3d-utilities\") pod \"ecdc9fdc-fd28-4bdc-8403-934067c2ec3d\" (UID: \"ecdc9fdc-fd28-4bdc-8403-934067c2ec3d\") "
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.391267 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecdc9fdc-fd28-4bdc-8403-934067c2ec3d-catalog-content\") pod \"ecdc9fdc-fd28-4bdc-8403-934067c2ec3d\" (UID: \"ecdc9fdc-fd28-4bdc-8403-934067c2ec3d\") "
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.391377 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhnxf\" (UniqueName: \"kubernetes.io/projected/ecdc9fdc-fd28-4bdc-8403-934067c2ec3d-kube-api-access-dhnxf\") pod \"ecdc9fdc-fd28-4bdc-8403-934067c2ec3d\" (UID: \"ecdc9fdc-fd28-4bdc-8403-934067c2ec3d\") "
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.391882 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.391906 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.391919 4981 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.391936 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-config\") on node \"crc\" DevicePath \"\""
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.391947 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d4e8b93-9791-4889-8e5f-5a3938235441-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.396268 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecdc9fdc-fd28-4bdc-8403-934067c2ec3d-utilities" (OuterVolumeSpecName: "utilities") pod "ecdc9fdc-fd28-4bdc-8403-934067c2ec3d" (UID: "ecdc9fdc-fd28-4bdc-8403-934067c2ec3d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.398880 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecdc9fdc-fd28-4bdc-8403-934067c2ec3d-kube-api-access-dhnxf" (OuterVolumeSpecName: "kube-api-access-dhnxf") pod "ecdc9fdc-fd28-4bdc-8403-934067c2ec3d" (UID: "ecdc9fdc-fd28-4bdc-8403-934067c2ec3d"). InnerVolumeSpecName "kube-api-access-dhnxf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.459220 4981 scope.go:117] "RemoveContainer" containerID="46f07e59ffd023160b5b0f28c57bbb05710a1032f129bd2a26938cf25a90cd4e"
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.496162 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhnxf\" (UniqueName: \"kubernetes.io/projected/ecdc9fdc-fd28-4bdc-8403-934067c2ec3d-kube-api-access-dhnxf\") on node \"crc\" DevicePath \"\""
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.496190 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecdc9fdc-fd28-4bdc-8403-934067c2ec3d-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.523499 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecdc9fdc-fd28-4bdc-8403-934067c2ec3d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ecdc9fdc-fd28-4bdc-8403-934067c2ec3d" (UID: "ecdc9fdc-fd28-4bdc-8403-934067c2ec3d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.599228 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecdc9fdc-fd28-4bdc-8403-934067c2ec3d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.628482 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba"
Feb 27 19:17:18 crc kubenswrapper[4981]: E0227 19:17:18.628842 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6"
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.922721 4981 generic.go:334] "Generic (PLEG): container finished" podID="eccdd187-3938-4331-82f9-b5dac2e9c1c1" containerID="0107fd337927931131bec521f05b370a528d6b80221a1a8d41f45e98551f9de5" exitCode=0
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.922788 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-badb-account-create-update-4hf5f" event={"ID":"eccdd187-3938-4331-82f9-b5dac2e9c1c1","Type":"ContainerDied","Data":"0107fd337927931131bec521f05b370a528d6b80221a1a8d41f45e98551f9de5"}
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.924614 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d6d3674-eb9a-4631-998a-73b14544d6d8","Type":"ContainerStarted","Data":"625319f63b3d20f0554f150a1b2e887bb9d72173a0ada06f323ca047d1e9d077"}
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.926117 4981 generic.go:334] "Generic (PLEG): container finished" podID="2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32" containerID="d457a0124f7f00549b4901eaf6afb0ec7c6f599305643ac259338e9d66f9806a" exitCode=0
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.926136 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-bbcc-account-create-update-rw9k7" event={"ID":"2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32","Type":"ContainerDied","Data":"d457a0124f7f00549b4901eaf6afb0ec7c6f599305643ac259338e9d66f9806a"}
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.928603 4981 generic.go:334] "Generic (PLEG): container finished" podID="ecdc9fdc-fd28-4bdc-8403-934067c2ec3d" containerID="d889a6b00da43b08c5a203968124cd83a2f3b4226be1024b9bb165ee13e5913d" exitCode=0
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.928661 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kp7th" event={"ID":"ecdc9fdc-fd28-4bdc-8403-934067c2ec3d","Type":"ContainerDied","Data":"d889a6b00da43b08c5a203968124cd83a2f3b4226be1024b9bb165ee13e5913d"}
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.928681 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kp7th" event={"ID":"ecdc9fdc-fd28-4bdc-8403-934067c2ec3d","Type":"ContainerDied","Data":"3c7a5588a97324e36beb21d73c9b4604d54cf9d2423fb6a7565b2023d8fd06d9"}
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.928684 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kp7th"
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.928728 4981 scope.go:117] "RemoveContainer" containerID="d889a6b00da43b08c5a203968124cd83a2f3b4226be1024b9bb165ee13e5913d"
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.932700 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4177f13-d8db-4bc0-b8d0-ab10ad41141a","Type":"ContainerStarted","Data":"104ad2208f27454f8e4b776bd22c70b0029b5027f407230759c896d3d4ea0beb"}
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.932918 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bf4c8dd6c-shmsm"
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.972994 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.972968229 podStartE2EDuration="6.972968229s" podCreationTimestamp="2026-02-27 19:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:17:18.962330372 +0000 UTC m=+1938.441111532" watchObservedRunningTime="2026-02-27 19:17:18.972968229 +0000 UTC m=+1938.451749389"
Feb 27 19:17:18 crc kubenswrapper[4981]: I0227 19:17:18.979400 4981 scope.go:117] "RemoveContainer" containerID="b1f584f37e5aeb76a075432824101373c92311609d3c6df5917789ce20af5508"
Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.043220 4981 scope.go:117] "RemoveContainer" containerID="f64edeb57065e6b527af77dd076ea91e224b207dfe6bca0498cefbf6ce296f8e"
Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.043644 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kp7th"]
Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.053284 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kp7th"]
Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.060329 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bf4c8dd6c-shmsm"]
Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.069826 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bf4c8dd6c-shmsm"]
Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.170136 4981 scope.go:117] "RemoveContainer" containerID="d889a6b00da43b08c5a203968124cd83a2f3b4226be1024b9bb165ee13e5913d"
Feb 27 19:17:19 crc kubenswrapper[4981]: E0227 19:17:19.170684 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d889a6b00da43b08c5a203968124cd83a2f3b4226be1024b9bb165ee13e5913d\": container with ID starting with d889a6b00da43b08c5a203968124cd83a2f3b4226be1024b9bb165ee13e5913d not found: ID does not exist" containerID="d889a6b00da43b08c5a203968124cd83a2f3b4226be1024b9bb165ee13e5913d"
Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.170738 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d889a6b00da43b08c5a203968124cd83a2f3b4226be1024b9bb165ee13e5913d"} err="failed to get container status \"d889a6b00da43b08c5a203968124cd83a2f3b4226be1024b9bb165ee13e5913d\": rpc error: code = NotFound desc = could not find container \"d889a6b00da43b08c5a203968124cd83a2f3b4226be1024b9bb165ee13e5913d\": container with ID starting with d889a6b00da43b08c5a203968124cd83a2f3b4226be1024b9bb165ee13e5913d not found: ID does not exist"
Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.170776 4981 scope.go:117] "RemoveContainer" containerID="b1f584f37e5aeb76a075432824101373c92311609d3c6df5917789ce20af5508"
Feb 27 19:17:19 crc kubenswrapper[4981]: E0227 19:17:19.171171 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1f584f37e5aeb76a075432824101373c92311609d3c6df5917789ce20af5508\": container with ID starting with b1f584f37e5aeb76a075432824101373c92311609d3c6df5917789ce20af5508 not found: ID does not exist" containerID="b1f584f37e5aeb76a075432824101373c92311609d3c6df5917789ce20af5508"
Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.171227 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1f584f37e5aeb76a075432824101373c92311609d3c6df5917789ce20af5508"} err="failed to get container status \"b1f584f37e5aeb76a075432824101373c92311609d3c6df5917789ce20af5508\": rpc error: code = NotFound desc = could not find container \"b1f584f37e5aeb76a075432824101373c92311609d3c6df5917789ce20af5508\": container with ID starting with b1f584f37e5aeb76a075432824101373c92311609d3c6df5917789ce20af5508 not found: ID does not exist"
Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.171260 4981 scope.go:117] "RemoveContainer" containerID="f64edeb57065e6b527af77dd076ea91e224b207dfe6bca0498cefbf6ce296f8e"
Feb 27 19:17:19 crc kubenswrapper[4981]: E0227 19:17:19.171558 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f64edeb57065e6b527af77dd076ea91e224b207dfe6bca0498cefbf6ce296f8e\": container with ID starting with f64edeb57065e6b527af77dd076ea91e224b207dfe6bca0498cefbf6ce296f8e not found: ID does not exist" containerID="f64edeb57065e6b527af77dd076ea91e224b207dfe6bca0498cefbf6ce296f8e"
Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.171590 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f64edeb57065e6b527af77dd076ea91e224b207dfe6bca0498cefbf6ce296f8e"} err="failed to get container status \"f64edeb57065e6b527af77dd076ea91e224b207dfe6bca0498cefbf6ce296f8e\": rpc error: code = NotFound desc = could not find container \"f64edeb57065e6b527af77dd076ea91e224b207dfe6bca0498cefbf6ce296f8e\": container with ID starting with f64edeb57065e6b527af77dd076ea91e224b207dfe6bca0498cefbf6ce296f8e not found: ID does not exist"
Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.361487 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-a620-account-create-update-4fnbx"
Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.401652 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5kflz"
Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.518482 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eac1eaf7-6fea-4dae-b8f3-b81615d30ee0-operator-scripts\") pod \"eac1eaf7-6fea-4dae-b8f3-b81615d30ee0\" (UID: \"eac1eaf7-6fea-4dae-b8f3-b81615d30ee0\") "
Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.519156 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eac1eaf7-6fea-4dae-b8f3-b81615d30ee0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eac1eaf7-6fea-4dae-b8f3-b81615d30ee0" (UID: "eac1eaf7-6fea-4dae-b8f3-b81615d30ee0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.519295 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-996h5\" (UniqueName: \"kubernetes.io/projected/19242b3f-f738-49a0-be1b-578a62ec5f22-kube-api-access-996h5\") pod \"19242b3f-f738-49a0-be1b-578a62ec5f22\" (UID: \"19242b3f-f738-49a0-be1b-578a62ec5f22\") "
Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.519344 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19242b3f-f738-49a0-be1b-578a62ec5f22-operator-scripts\") pod \"19242b3f-f738-49a0-be1b-578a62ec5f22\" (UID: \"19242b3f-f738-49a0-be1b-578a62ec5f22\") "
Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.519381 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps47h\" (UniqueName: \"kubernetes.io/projected/eac1eaf7-6fea-4dae-b8f3-b81615d30ee0-kube-api-access-ps47h\") pod \"eac1eaf7-6fea-4dae-b8f3-b81615d30ee0\" (UID: \"eac1eaf7-6fea-4dae-b8f3-b81615d30ee0\") "
Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.519650 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eac1eaf7-6fea-4dae-b8f3-b81615d30ee0-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.520248 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19242b3f-f738-49a0-be1b-578a62ec5f22-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19242b3f-f738-49a0-be1b-578a62ec5f22" (UID: "19242b3f-f738-49a0-be1b-578a62ec5f22"). InnerVolumeSpecName "operator-scripts".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.524384 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19242b3f-f738-49a0-be1b-578a62ec5f22-kube-api-access-996h5" (OuterVolumeSpecName: "kube-api-access-996h5") pod "19242b3f-f738-49a0-be1b-578a62ec5f22" (UID: "19242b3f-f738-49a0-be1b-578a62ec5f22"). InnerVolumeSpecName "kube-api-access-996h5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.524426 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eac1eaf7-6fea-4dae-b8f3-b81615d30ee0-kube-api-access-ps47h" (OuterVolumeSpecName: "kube-api-access-ps47h") pod "eac1eaf7-6fea-4dae-b8f3-b81615d30ee0" (UID: "eac1eaf7-6fea-4dae-b8f3-b81615d30ee0"). InnerVolumeSpecName "kube-api-access-ps47h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.620494 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps47h\" (UniqueName: \"kubernetes.io/projected/eac1eaf7-6fea-4dae-b8f3-b81615d30ee0-kube-api-access-ps47h\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.620532 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-996h5\" (UniqueName: \"kubernetes.io/projected/19242b3f-f738-49a0-be1b-578a62ec5f22-kube-api-access-996h5\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.620543 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19242b3f-f738-49a0-be1b-578a62ec5f22-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.640106 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d4e8b93-9791-4889-8e5f-5a3938235441" 
path="/var/lib/kubelet/pods/2d4e8b93-9791-4889-8e5f-5a3938235441/volumes" Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.640692 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecdc9fdc-fd28-4bdc-8403-934067c2ec3d" path="/var/lib/kubelet/pods/ecdc9fdc-fd28-4bdc-8403-934067c2ec3d/volumes" Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.998439 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5kflz" Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.999440 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5kflz" event={"ID":"19242b3f-f738-49a0-be1b-578a62ec5f22","Type":"ContainerDied","Data":"8e6cd79bec3f9b61ee2b8fa458cbd7f9249cb5718c3b9e0ce423aa94fb8c5e9e"} Feb 27 19:17:19 crc kubenswrapper[4981]: I0227 19:17:19.999498 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e6cd79bec3f9b61ee2b8fa458cbd7f9249cb5718c3b9e0ce423aa94fb8c5e9e" Feb 27 19:17:20 crc kubenswrapper[4981]: I0227 19:17:20.002314 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-a620-account-create-update-4fnbx" Feb 27 19:17:20 crc kubenswrapper[4981]: I0227 19:17:20.002899 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a620-account-create-update-4fnbx" event={"ID":"eac1eaf7-6fea-4dae-b8f3-b81615d30ee0","Type":"ContainerDied","Data":"f27bb6bb541cbc25d24487063a1e500ae58a09a7449db1ba1abe9f11d5a8e768"} Feb 27 19:17:20 crc kubenswrapper[4981]: I0227 19:17:20.002928 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f27bb6bb541cbc25d24487063a1e500ae58a09a7449db1ba1abe9f11d5a8e768" Feb 27 19:17:20 crc kubenswrapper[4981]: I0227 19:17:20.008706 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d6d3674-eb9a-4631-998a-73b14544d6d8","Type":"ContainerStarted","Data":"641fc7ebec38cf90e79555e2636b8a9f114beaaf5c095283d1a356214e3b075c"} Feb 27 19:17:20 crc kubenswrapper[4981]: I0227 19:17:20.256301 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 27 19:17:20 crc kubenswrapper[4981]: I0227 19:17:20.431600 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-bbcc-account-create-update-rw9k7" Feb 27 19:17:20 crc kubenswrapper[4981]: I0227 19:17:20.442033 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-badb-account-create-update-4hf5f" Feb 27 19:17:20 crc kubenswrapper[4981]: I0227 19:17:20.550854 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32-operator-scripts\") pod \"2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32\" (UID: \"2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32\") " Feb 27 19:17:20 crc kubenswrapper[4981]: I0227 19:17:20.551342 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eccdd187-3938-4331-82f9-b5dac2e9c1c1-operator-scripts\") pod \"eccdd187-3938-4331-82f9-b5dac2e9c1c1\" (UID: \"eccdd187-3938-4331-82f9-b5dac2e9c1c1\") " Feb 27 19:17:20 crc kubenswrapper[4981]: I0227 19:17:20.551487 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzfmc\" (UniqueName: \"kubernetes.io/projected/2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32-kube-api-access-gzfmc\") pod \"2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32\" (UID: \"2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32\") " Feb 27 19:17:20 crc kubenswrapper[4981]: I0227 19:17:20.551561 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlp42\" (UniqueName: \"kubernetes.io/projected/eccdd187-3938-4331-82f9-b5dac2e9c1c1-kube-api-access-dlp42\") pod \"eccdd187-3938-4331-82f9-b5dac2e9c1c1\" (UID: \"eccdd187-3938-4331-82f9-b5dac2e9c1c1\") " Feb 27 19:17:20 crc kubenswrapper[4981]: I0227 19:17:20.552491 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eccdd187-3938-4331-82f9-b5dac2e9c1c1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eccdd187-3938-4331-82f9-b5dac2e9c1c1" (UID: "eccdd187-3938-4331-82f9-b5dac2e9c1c1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:17:20 crc kubenswrapper[4981]: I0227 19:17:20.553479 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32" (UID: "2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:17:20 crc kubenswrapper[4981]: I0227 19:17:20.557820 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32-kube-api-access-gzfmc" (OuterVolumeSpecName: "kube-api-access-gzfmc") pod "2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32" (UID: "2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32"). InnerVolumeSpecName "kube-api-access-gzfmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:17:20 crc kubenswrapper[4981]: I0227 19:17:20.558084 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eccdd187-3938-4331-82f9-b5dac2e9c1c1-kube-api-access-dlp42" (OuterVolumeSpecName: "kube-api-access-dlp42") pod "eccdd187-3938-4331-82f9-b5dac2e9c1c1" (UID: "eccdd187-3938-4331-82f9-b5dac2e9c1c1"). InnerVolumeSpecName "kube-api-access-dlp42". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:17:20 crc kubenswrapper[4981]: I0227 19:17:20.653423 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzfmc\" (UniqueName: \"kubernetes.io/projected/2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32-kube-api-access-gzfmc\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:20 crc kubenswrapper[4981]: I0227 19:17:20.653450 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlp42\" (UniqueName: \"kubernetes.io/projected/eccdd187-3938-4331-82f9-b5dac2e9c1c1-kube-api-access-dlp42\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:20 crc kubenswrapper[4981]: I0227 19:17:20.653462 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:20 crc kubenswrapper[4981]: I0227 19:17:20.653473 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eccdd187-3938-4331-82f9-b5dac2e9c1c1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:21 crc kubenswrapper[4981]: I0227 19:17:21.019524 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-badb-account-create-update-4hf5f" Feb 27 19:17:21 crc kubenswrapper[4981]: I0227 19:17:21.020454 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-badb-account-create-update-4hf5f" event={"ID":"eccdd187-3938-4331-82f9-b5dac2e9c1c1","Type":"ContainerDied","Data":"3fa08cae0d19eff44fe049538c47338dc7fcb2d0cf205553c7cc9f98b137f3d0"} Feb 27 19:17:21 crc kubenswrapper[4981]: I0227 19:17:21.020497 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fa08cae0d19eff44fe049538c47338dc7fcb2d0cf205553c7cc9f98b137f3d0" Feb 27 19:17:21 crc kubenswrapper[4981]: I0227 19:17:21.022891 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d6d3674-eb9a-4631-998a-73b14544d6d8","Type":"ContainerStarted","Data":"154021a6ea5eb18406c23a7c17cb55e340445beff1d1a6b1d4b62e023d46890d"} Feb 27 19:17:21 crc kubenswrapper[4981]: I0227 19:17:21.024399 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-bbcc-account-create-update-rw9k7" event={"ID":"2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32","Type":"ContainerDied","Data":"39ff05dc031af9e09325c4e079bd5c68741f0d7647784d7d73b61621a09de604"} Feb 27 19:17:21 crc kubenswrapper[4981]: I0227 19:17:21.024420 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39ff05dc031af9e09325c4e079bd5c68741f0d7647784d7d73b61621a09de604" Feb 27 19:17:21 crc kubenswrapper[4981]: I0227 19:17:21.024469 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-bbcc-account-create-update-rw9k7" Feb 27 19:17:23 crc kubenswrapper[4981]: I0227 19:17:23.044760 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d6d3674-eb9a-4631-998a-73b14544d6d8","Type":"ContainerStarted","Data":"4447f82a59c10c570adba5ae0b788f7ef303757c8f74ccd148d1529101b045c5"} Feb 27 19:17:23 crc kubenswrapper[4981]: I0227 19:17:23.045625 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 19:17:23 crc kubenswrapper[4981]: I0227 19:17:23.079198 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.819448976 podStartE2EDuration="11.079172456s" podCreationTimestamp="2026-02-27 19:17:12 +0000 UTC" firstStartedPulling="2026-02-27 19:17:13.889901464 +0000 UTC m=+1933.368682624" lastFinishedPulling="2026-02-27 19:17:22.149624954 +0000 UTC m=+1941.628406104" observedRunningTime="2026-02-27 19:17:23.06625897 +0000 UTC m=+1942.545040140" watchObservedRunningTime="2026-02-27 19:17:23.079172456 +0000 UTC m=+1942.557953616" Feb 27 19:17:23 crc kubenswrapper[4981]: I0227 19:17:23.400351 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 27 19:17:23 crc kubenswrapper[4981]: I0227 19:17:23.400420 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 27 19:17:23 crc kubenswrapper[4981]: I0227 19:17:23.421299 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:23 crc kubenswrapper[4981]: I0227 19:17:23.421361 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:23 crc kubenswrapper[4981]: I0227 19:17:23.448346 4981 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 27 19:17:23 crc kubenswrapper[4981]: I0227 19:17:23.457150 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:23 crc kubenswrapper[4981]: I0227 19:17:23.476412 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:23 crc kubenswrapper[4981]: I0227 19:17:23.478971 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 27 19:17:24 crc kubenswrapper[4981]: I0227 19:17:24.057131 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 27 19:17:24 crc kubenswrapper[4981]: I0227 19:17:24.057175 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:24 crc kubenswrapper[4981]: I0227 19:17:24.057188 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 27 19:17:24 crc kubenswrapper[4981]: I0227 19:17:24.057199 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.551994 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-clm2b"] Feb 27 19:17:25 crc kubenswrapper[4981]: E0227 19:17:25.552865 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99c593ba-9134-4372-8392-6903d47aba28" containerName="mariadb-database-create" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.552887 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="99c593ba-9134-4372-8392-6903d47aba28" containerName="mariadb-database-create" Feb 27 19:17:25 crc kubenswrapper[4981]: E0227 19:17:25.552908 4981 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19242b3f-f738-49a0-be1b-578a62ec5f22" containerName="mariadb-database-create" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.552916 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="19242b3f-f738-49a0-be1b-578a62ec5f22" containerName="mariadb-database-create" Feb 27 19:17:25 crc kubenswrapper[4981]: E0227 19:17:25.552930 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecdc9fdc-fd28-4bdc-8403-934067c2ec3d" containerName="registry-server" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.552939 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecdc9fdc-fd28-4bdc-8403-934067c2ec3d" containerName="registry-server" Feb 27 19:17:25 crc kubenswrapper[4981]: E0227 19:17:25.552953 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32" containerName="mariadb-account-create-update" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.552962 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32" containerName="mariadb-account-create-update" Feb 27 19:17:25 crc kubenswrapper[4981]: E0227 19:17:25.552977 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d4e8b93-9791-4889-8e5f-5a3938235441" containerName="dnsmasq-dns" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.552984 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d4e8b93-9791-4889-8e5f-5a3938235441" containerName="dnsmasq-dns" Feb 27 19:17:25 crc kubenswrapper[4981]: E0227 19:17:25.552997 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eccdd187-3938-4331-82f9-b5dac2e9c1c1" containerName="mariadb-account-create-update" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.553006 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="eccdd187-3938-4331-82f9-b5dac2e9c1c1" containerName="mariadb-account-create-update" Feb 27 19:17:25 
crc kubenswrapper[4981]: E0227 19:17:25.553020 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eac1eaf7-6fea-4dae-b8f3-b81615d30ee0" containerName="mariadb-account-create-update" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.553028 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac1eaf7-6fea-4dae-b8f3-b81615d30ee0" containerName="mariadb-account-create-update" Feb 27 19:17:25 crc kubenswrapper[4981]: E0227 19:17:25.553048 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecdc9fdc-fd28-4bdc-8403-934067c2ec3d" containerName="extract-content" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.553075 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecdc9fdc-fd28-4bdc-8403-934067c2ec3d" containerName="extract-content" Feb 27 19:17:25 crc kubenswrapper[4981]: E0227 19:17:25.553089 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed0c2fbd-f556-4dba-a374-4f212f96210a" containerName="mariadb-database-create" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.553097 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed0c2fbd-f556-4dba-a374-4f212f96210a" containerName="mariadb-database-create" Feb 27 19:17:25 crc kubenswrapper[4981]: E0227 19:17:25.553117 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d4e8b93-9791-4889-8e5f-5a3938235441" containerName="init" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.553125 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d4e8b93-9791-4889-8e5f-5a3938235441" containerName="init" Feb 27 19:17:25 crc kubenswrapper[4981]: E0227 19:17:25.553146 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecdc9fdc-fd28-4bdc-8403-934067c2ec3d" containerName="extract-utilities" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.553157 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecdc9fdc-fd28-4bdc-8403-934067c2ec3d" containerName="extract-utilities" Feb 
27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.553383 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="19242b3f-f738-49a0-be1b-578a62ec5f22" containerName="mariadb-database-create" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.553399 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="eccdd187-3938-4331-82f9-b5dac2e9c1c1" containerName="mariadb-account-create-update" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.553424 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecdc9fdc-fd28-4bdc-8403-934067c2ec3d" containerName="registry-server" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.553439 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="99c593ba-9134-4372-8392-6903d47aba28" containerName="mariadb-database-create" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.553456 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d4e8b93-9791-4889-8e5f-5a3938235441" containerName="dnsmasq-dns" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.553470 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32" containerName="mariadb-account-create-update" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.553484 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="eac1eaf7-6fea-4dae-b8f3-b81615d30ee0" containerName="mariadb-account-create-update" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.553498 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed0c2fbd-f556-4dba-a374-4f212f96210a" containerName="mariadb-database-create" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.559458 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-clm2b" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.564695 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8rxc8" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.565007 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.565249 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.565490 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-clm2b"] Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.654544 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b4a8933-c57c-4c72-ba77-e6b637a282ee-scripts\") pod \"nova-cell0-conductor-db-sync-clm2b\" (UID: \"3b4a8933-c57c-4c72-ba77-e6b637a282ee\") " pod="openstack/nova-cell0-conductor-db-sync-clm2b" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.654846 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5qcg\" (UniqueName: \"kubernetes.io/projected/3b4a8933-c57c-4c72-ba77-e6b637a282ee-kube-api-access-g5qcg\") pod \"nova-cell0-conductor-db-sync-clm2b\" (UID: \"3b4a8933-c57c-4c72-ba77-e6b637a282ee\") " pod="openstack/nova-cell0-conductor-db-sync-clm2b" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.654918 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b4a8933-c57c-4c72-ba77-e6b637a282ee-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-clm2b\" (UID: \"3b4a8933-c57c-4c72-ba77-e6b637a282ee\") " 
pod="openstack/nova-cell0-conductor-db-sync-clm2b" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.655185 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b4a8933-c57c-4c72-ba77-e6b637a282ee-config-data\") pod \"nova-cell0-conductor-db-sync-clm2b\" (UID: \"3b4a8933-c57c-4c72-ba77-e6b637a282ee\") " pod="openstack/nova-cell0-conductor-db-sync-clm2b" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.759299 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5qcg\" (UniqueName: \"kubernetes.io/projected/3b4a8933-c57c-4c72-ba77-e6b637a282ee-kube-api-access-g5qcg\") pod \"nova-cell0-conductor-db-sync-clm2b\" (UID: \"3b4a8933-c57c-4c72-ba77-e6b637a282ee\") " pod="openstack/nova-cell0-conductor-db-sync-clm2b" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.759811 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b4a8933-c57c-4c72-ba77-e6b637a282ee-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-clm2b\" (UID: \"3b4a8933-c57c-4c72-ba77-e6b637a282ee\") " pod="openstack/nova-cell0-conductor-db-sync-clm2b" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.761344 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b4a8933-c57c-4c72-ba77-e6b637a282ee-config-data\") pod \"nova-cell0-conductor-db-sync-clm2b\" (UID: \"3b4a8933-c57c-4c72-ba77-e6b637a282ee\") " pod="openstack/nova-cell0-conductor-db-sync-clm2b" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.761468 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b4a8933-c57c-4c72-ba77-e6b637a282ee-scripts\") pod \"nova-cell0-conductor-db-sync-clm2b\" (UID: 
\"3b4a8933-c57c-4c72-ba77-e6b637a282ee\") " pod="openstack/nova-cell0-conductor-db-sync-clm2b" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.769861 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b4a8933-c57c-4c72-ba77-e6b637a282ee-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-clm2b\" (UID: \"3b4a8933-c57c-4c72-ba77-e6b637a282ee\") " pod="openstack/nova-cell0-conductor-db-sync-clm2b" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.770655 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b4a8933-c57c-4c72-ba77-e6b637a282ee-config-data\") pod \"nova-cell0-conductor-db-sync-clm2b\" (UID: \"3b4a8933-c57c-4c72-ba77-e6b637a282ee\") " pod="openstack/nova-cell0-conductor-db-sync-clm2b" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.782581 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b4a8933-c57c-4c72-ba77-e6b637a282ee-scripts\") pod \"nova-cell0-conductor-db-sync-clm2b\" (UID: \"3b4a8933-c57c-4c72-ba77-e6b637a282ee\") " pod="openstack/nova-cell0-conductor-db-sync-clm2b" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.804704 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5qcg\" (UniqueName: \"kubernetes.io/projected/3b4a8933-c57c-4c72-ba77-e6b637a282ee-kube-api-access-g5qcg\") pod \"nova-cell0-conductor-db-sync-clm2b\" (UID: \"3b4a8933-c57c-4c72-ba77-e6b637a282ee\") " pod="openstack/nova-cell0-conductor-db-sync-clm2b" Feb 27 19:17:25 crc kubenswrapper[4981]: I0227 19:17:25.893832 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-clm2b" Feb 27 19:17:26 crc kubenswrapper[4981]: I0227 19:17:26.083083 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:26 crc kubenswrapper[4981]: I0227 19:17:26.085560 4981 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 19:17:26 crc kubenswrapper[4981]: I0227 19:17:26.086794 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:26 crc kubenswrapper[4981]: I0227 19:17:26.088445 4981 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 19:17:26 crc kubenswrapper[4981]: I0227 19:17:26.088457 4981 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 19:17:26 crc kubenswrapper[4981]: I0227 19:17:26.467789 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-clm2b"] Feb 27 19:17:26 crc kubenswrapper[4981]: I0227 19:17:26.484713 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 27 19:17:26 crc kubenswrapper[4981]: I0227 19:17:26.494460 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 27 19:17:27 crc kubenswrapper[4981]: I0227 19:17:27.094207 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-clm2b" event={"ID":"3b4a8933-c57c-4c72-ba77-e6b637a282ee","Type":"ContainerStarted","Data":"3374d8ae06c19d6183b6df6f336928ded9c7055aa69d4764c21afb66d028365f"} Feb 27 19:17:27 crc kubenswrapper[4981]: I0227 19:17:27.886326 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 19:17:29 crc kubenswrapper[4981]: I0227 19:17:29.117456 4981 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0491c487-f89e-4ce4-ad63-323ad7624487" containerName="glance-log" containerID="cri-o://5e181015ebaa0f3e4a87d5a2fd2fa0cec47474fd6f1f3e6e3ce25f06cffdcb82" gracePeriod=30 Feb 27 19:17:29 crc kubenswrapper[4981]: I0227 19:17:29.117551 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0491c487-f89e-4ce4-ad63-323ad7624487" containerName="glance-httpd" containerID="cri-o://cac0b92f8247c85d31a507b6ae253c037102dd71220d47042a591b3da9346b43" gracePeriod=30 Feb 27 19:17:29 crc kubenswrapper[4981]: I0227 19:17:29.572920 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 19:17:29 crc kubenswrapper[4981]: I0227 19:17:29.573312 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e4177f13-d8db-4bc0-b8d0-ab10ad41141a" containerName="glance-log" containerID="cri-o://ea037a2a3049ae2cd6efb3f77f7b56bfc162c209d7183395934e2ad212d932e0" gracePeriod=30 Feb 27 19:17:29 crc kubenswrapper[4981]: I0227 19:17:29.573468 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e4177f13-d8db-4bc0-b8d0-ab10ad41141a" containerName="glance-httpd" containerID="cri-o://104ad2208f27454f8e4b776bd22c70b0029b5027f407230759c896d3d4ea0beb" gracePeriod=30 Feb 27 19:17:30 crc kubenswrapper[4981]: I0227 19:17:30.130995 4981 generic.go:334] "Generic (PLEG): container finished" podID="e4177f13-d8db-4bc0-b8d0-ab10ad41141a" containerID="ea037a2a3049ae2cd6efb3f77f7b56bfc162c209d7183395934e2ad212d932e0" exitCode=143 Feb 27 19:17:30 crc kubenswrapper[4981]: I0227 19:17:30.131091 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"e4177f13-d8db-4bc0-b8d0-ab10ad41141a","Type":"ContainerDied","Data":"ea037a2a3049ae2cd6efb3f77f7b56bfc162c209d7183395934e2ad212d932e0"} Feb 27 19:17:30 crc kubenswrapper[4981]: I0227 19:17:30.135002 4981 generic.go:334] "Generic (PLEG): container finished" podID="0491c487-f89e-4ce4-ad63-323ad7624487" containerID="5e181015ebaa0f3e4a87d5a2fd2fa0cec47474fd6f1f3e6e3ce25f06cffdcb82" exitCode=143 Feb 27 19:17:30 crc kubenswrapper[4981]: I0227 19:17:30.135037 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0491c487-f89e-4ce4-ad63-323ad7624487","Type":"ContainerDied","Data":"5e181015ebaa0f3e4a87d5a2fd2fa0cec47474fd6f1f3e6e3ce25f06cffdcb82"} Feb 27 19:17:30 crc kubenswrapper[4981]: I0227 19:17:30.629157 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" Feb 27 19:17:30 crc kubenswrapper[4981]: I0227 19:17:30.726569 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:17:30 crc kubenswrapper[4981]: I0227 19:17:30.726880 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d6d3674-eb9a-4631-998a-73b14544d6d8" containerName="ceilometer-central-agent" containerID="cri-o://625319f63b3d20f0554f150a1b2e887bb9d72173a0ada06f323ca047d1e9d077" gracePeriod=30 Feb 27 19:17:30 crc kubenswrapper[4981]: I0227 19:17:30.727034 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d6d3674-eb9a-4631-998a-73b14544d6d8" containerName="proxy-httpd" containerID="cri-o://4447f82a59c10c570adba5ae0b788f7ef303757c8f74ccd148d1529101b045c5" gracePeriod=30 Feb 27 19:17:30 crc kubenswrapper[4981]: I0227 19:17:30.727109 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d6d3674-eb9a-4631-998a-73b14544d6d8" containerName="sg-core" 
containerID="cri-o://154021a6ea5eb18406c23a7c17cb55e340445beff1d1a6b1d4b62e023d46890d" gracePeriod=30 Feb 27 19:17:30 crc kubenswrapper[4981]: I0227 19:17:30.727103 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d6d3674-eb9a-4631-998a-73b14544d6d8" containerName="ceilometer-notification-agent" containerID="cri-o://641fc7ebec38cf90e79555e2636b8a9f114beaaf5c095283d1a356214e3b075c" gracePeriod=30 Feb 27 19:17:31 crc kubenswrapper[4981]: I0227 19:17:31.150103 4981 generic.go:334] "Generic (PLEG): container finished" podID="5d6d3674-eb9a-4631-998a-73b14544d6d8" containerID="4447f82a59c10c570adba5ae0b788f7ef303757c8f74ccd148d1529101b045c5" exitCode=0 Feb 27 19:17:31 crc kubenswrapper[4981]: I0227 19:17:31.150395 4981 generic.go:334] "Generic (PLEG): container finished" podID="5d6d3674-eb9a-4631-998a-73b14544d6d8" containerID="154021a6ea5eb18406c23a7c17cb55e340445beff1d1a6b1d4b62e023d46890d" exitCode=2 Feb 27 19:17:31 crc kubenswrapper[4981]: I0227 19:17:31.150424 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d6d3674-eb9a-4631-998a-73b14544d6d8","Type":"ContainerDied","Data":"4447f82a59c10c570adba5ae0b788f7ef303757c8f74ccd148d1529101b045c5"} Feb 27 19:17:31 crc kubenswrapper[4981]: I0227 19:17:31.150457 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d6d3674-eb9a-4631-998a-73b14544d6d8","Type":"ContainerDied","Data":"154021a6ea5eb18406c23a7c17cb55e340445beff1d1a6b1d4b62e023d46890d"} Feb 27 19:17:32 crc kubenswrapper[4981]: I0227 19:17:32.161797 4981 generic.go:334] "Generic (PLEG): container finished" podID="5d6d3674-eb9a-4631-998a-73b14544d6d8" containerID="625319f63b3d20f0554f150a1b2e887bb9d72173a0ada06f323ca047d1e9d077" exitCode=0 Feb 27 19:17:32 crc kubenswrapper[4981]: I0227 19:17:32.161852 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5d6d3674-eb9a-4631-998a-73b14544d6d8","Type":"ContainerDied","Data":"625319f63b3d20f0554f150a1b2e887bb9d72173a0ada06f323ca047d1e9d077"} Feb 27 19:17:33 crc kubenswrapper[4981]: I0227 19:17:33.174300 4981 generic.go:334] "Generic (PLEG): container finished" podID="5d6d3674-eb9a-4631-998a-73b14544d6d8" containerID="641fc7ebec38cf90e79555e2636b8a9f114beaaf5c095283d1a356214e3b075c" exitCode=0 Feb 27 19:17:33 crc kubenswrapper[4981]: I0227 19:17:33.174375 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d6d3674-eb9a-4631-998a-73b14544d6d8","Type":"ContainerDied","Data":"641fc7ebec38cf90e79555e2636b8a9f114beaaf5c095283d1a356214e3b075c"} Feb 27 19:17:33 crc kubenswrapper[4981]: I0227 19:17:33.177274 4981 generic.go:334] "Generic (PLEG): container finished" podID="0491c487-f89e-4ce4-ad63-323ad7624487" containerID="cac0b92f8247c85d31a507b6ae253c037102dd71220d47042a591b3da9346b43" exitCode=0 Feb 27 19:17:33 crc kubenswrapper[4981]: I0227 19:17:33.177327 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0491c487-f89e-4ce4-ad63-323ad7624487","Type":"ContainerDied","Data":"cac0b92f8247c85d31a507b6ae253c037102dd71220d47042a591b3da9346b43"} Feb 27 19:17:34 crc kubenswrapper[4981]: I0227 19:17:34.190537 4981 generic.go:334] "Generic (PLEG): container finished" podID="e4177f13-d8db-4bc0-b8d0-ab10ad41141a" containerID="104ad2208f27454f8e4b776bd22c70b0029b5027f407230759c896d3d4ea0beb" exitCode=0 Feb 27 19:17:34 crc kubenswrapper[4981]: I0227 19:17:34.190595 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4177f13-d8db-4bc0-b8d0-ab10ad41141a","Type":"ContainerDied","Data":"104ad2208f27454f8e4b776bd22c70b0029b5027f407230759c896d3d4ea0beb"} Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.225915 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerStarted","Data":"3eaaac0016632062e717d6fc785b5e6a960c31de063fc4ec9f829edb351d80fe"} Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.776294 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.784572 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.809510 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-combined-ca-bundle\") pod \"5d6d3674-eb9a-4631-998a-73b14544d6d8\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.809818 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0491c487-f89e-4ce4-ad63-323ad7624487-public-tls-certs\") pod \"0491c487-f89e-4ce4-ad63-323ad7624487\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.809877 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-scripts\") pod \"5d6d3674-eb9a-4631-998a-73b14544d6d8\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.809936 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0491c487-f89e-4ce4-ad63-323ad7624487-scripts\") pod \"0491c487-f89e-4ce4-ad63-323ad7624487\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " Feb 27 19:17:36 crc 
kubenswrapper[4981]: I0227 19:17:36.810048 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d6d3674-eb9a-4631-998a-73b14544d6d8-run-httpd\") pod \"5d6d3674-eb9a-4631-998a-73b14544d6d8\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.810118 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"0491c487-f89e-4ce4-ad63-323ad7624487\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.810185 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbkv4\" (UniqueName: \"kubernetes.io/projected/5d6d3674-eb9a-4631-998a-73b14544d6d8-kube-api-access-zbkv4\") pod \"5d6d3674-eb9a-4631-998a-73b14544d6d8\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.810230 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d6d3674-eb9a-4631-998a-73b14544d6d8-log-httpd\") pod \"5d6d3674-eb9a-4631-998a-73b14544d6d8\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.810259 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0491c487-f89e-4ce4-ad63-323ad7624487-httpd-run\") pod \"0491c487-f89e-4ce4-ad63-323ad7624487\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.810287 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-config-data\") pod \"5d6d3674-eb9a-4631-998a-73b14544d6d8\" (UID: 
\"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.810315 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-sg-core-conf-yaml\") pod \"5d6d3674-eb9a-4631-998a-73b14544d6d8\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.810378 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0491c487-f89e-4ce4-ad63-323ad7624487-logs\") pod \"0491c487-f89e-4ce4-ad63-323ad7624487\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.810428 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0491c487-f89e-4ce4-ad63-323ad7624487-combined-ca-bundle\") pod \"0491c487-f89e-4ce4-ad63-323ad7624487\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.810459 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0491c487-f89e-4ce4-ad63-323ad7624487-config-data\") pod \"0491c487-f89e-4ce4-ad63-323ad7624487\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.810485 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnhwz\" (UniqueName: \"kubernetes.io/projected/0491c487-f89e-4ce4-ad63-323ad7624487-kube-api-access-fnhwz\") pod \"0491c487-f89e-4ce4-ad63-323ad7624487\" (UID: \"0491c487-f89e-4ce4-ad63-323ad7624487\") " Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.810754 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5d6d3674-eb9a-4631-998a-73b14544d6d8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5d6d3674-eb9a-4631-998a-73b14544d6d8" (UID: "5d6d3674-eb9a-4631-998a-73b14544d6d8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.810927 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d6d3674-eb9a-4631-998a-73b14544d6d8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5d6d3674-eb9a-4631-998a-73b14544d6d8" (UID: "5d6d3674-eb9a-4631-998a-73b14544d6d8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.811111 4981 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d6d3674-eb9a-4631-998a-73b14544d6d8-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.811136 4981 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d6d3674-eb9a-4631-998a-73b14544d6d8-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.811411 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0491c487-f89e-4ce4-ad63-323ad7624487-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0491c487-f89e-4ce4-ad63-323ad7624487" (UID: "0491c487-f89e-4ce4-ad63-323ad7624487"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.811508 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0491c487-f89e-4ce4-ad63-323ad7624487-logs" (OuterVolumeSpecName: "logs") pod "0491c487-f89e-4ce4-ad63-323ad7624487" (UID: "0491c487-f89e-4ce4-ad63-323ad7624487"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.820396 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0491c487-f89e-4ce4-ad63-323ad7624487-scripts" (OuterVolumeSpecName: "scripts") pod "0491c487-f89e-4ce4-ad63-323ad7624487" (UID: "0491c487-f89e-4ce4-ad63-323ad7624487"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.822336 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d6d3674-eb9a-4631-998a-73b14544d6d8-kube-api-access-zbkv4" (OuterVolumeSpecName: "kube-api-access-zbkv4") pod "5d6d3674-eb9a-4631-998a-73b14544d6d8" (UID: "5d6d3674-eb9a-4631-998a-73b14544d6d8"). InnerVolumeSpecName "kube-api-access-zbkv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.822475 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "0491c487-f89e-4ce4-ad63-323ad7624487" (UID: "0491c487-f89e-4ce4-ad63-323ad7624487"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.823523 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-scripts" (OuterVolumeSpecName: "scripts") pod "5d6d3674-eb9a-4631-998a-73b14544d6d8" (UID: "5d6d3674-eb9a-4631-998a-73b14544d6d8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.825403 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0491c487-f89e-4ce4-ad63-323ad7624487-kube-api-access-fnhwz" (OuterVolumeSpecName: "kube-api-access-fnhwz") pod "0491c487-f89e-4ce4-ad63-323ad7624487" (UID: "0491c487-f89e-4ce4-ad63-323ad7624487"). InnerVolumeSpecName "kube-api-access-fnhwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.915001 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0491c487-f89e-4ce4-ad63-323ad7624487-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.915084 4981 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.915101 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbkv4\" (UniqueName: \"kubernetes.io/projected/5d6d3674-eb9a-4631-998a-73b14544d6d8-kube-api-access-zbkv4\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.915115 4981 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0491c487-f89e-4ce4-ad63-323ad7624487-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.915128 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0491c487-f89e-4ce4-ad63-323ad7624487-logs\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.915140 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnhwz\" (UniqueName: 
\"kubernetes.io/projected/0491c487-f89e-4ce4-ad63-323ad7624487-kube-api-access-fnhwz\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.915161 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.922501 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0491c487-f89e-4ce4-ad63-323ad7624487-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0491c487-f89e-4ce4-ad63-323ad7624487" (UID: "0491c487-f89e-4ce4-ad63-323ad7624487"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.960187 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5d6d3674-eb9a-4631-998a-73b14544d6d8" (UID: "5d6d3674-eb9a-4631-998a-73b14544d6d8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:36 crc kubenswrapper[4981]: I0227 19:17:36.981177 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0491c487-f89e-4ce4-ad63-323ad7624487-config-data" (OuterVolumeSpecName: "config-data") pod "0491c487-f89e-4ce4-ad63-323ad7624487" (UID: "0491c487-f89e-4ce4-ad63-323ad7624487"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.000876 4981 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.029028 4981 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.029290 4981 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.029307 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0491c487-f89e-4ce4-ad63-323ad7624487-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.029409 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0491c487-f89e-4ce4-ad63-323ad7624487-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:37 crc kubenswrapper[4981]: E0227 19:17:37.033825 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-combined-ca-bundle podName:5d6d3674-eb9a-4631-998a-73b14544d6d8 nodeName:}" failed. No retries permitted until 2026-02-27 19:17:37.533763436 +0000 UTC m=+1957.012544636 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-combined-ca-bundle") pod "5d6d3674-eb9a-4631-998a-73b14544d6d8" (UID: "5d6d3674-eb9a-4631-998a-73b14544d6d8") : error deleting /var/lib/kubelet/pods/5d6d3674-eb9a-4631-998a-73b14544d6d8/volume-subpaths: remove /var/lib/kubelet/pods/5d6d3674-eb9a-4631-998a-73b14544d6d8/volume-subpaths: no such file or directory Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.038177 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0491c487-f89e-4ce4-ad63-323ad7624487-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0491c487-f89e-4ce4-ad63-323ad7624487" (UID: "0491c487-f89e-4ce4-ad63-323ad7624487"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.042228 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-config-data" (OuterVolumeSpecName: "config-data") pod "5d6d3674-eb9a-4631-998a-73b14544d6d8" (UID: "5d6d3674-eb9a-4631-998a-73b14544d6d8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.131747 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.131784 4981 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0491c487-f89e-4ce4-ad63-323ad7624487-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.240303 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e4177f13-d8db-4bc0-b8d0-ab10ad41141a","Type":"ContainerDied","Data":"2036f916a3f9d0c2e8aa09963d8dba9504a8bcd17c6e731fdac399639c3dda55"} Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.240348 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2036f916a3f9d0c2e8aa09963d8dba9504a8bcd17c6e731fdac399639c3dda55" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.244921 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d6d3674-eb9a-4631-998a-73b14544d6d8","Type":"ContainerDied","Data":"f1397479d9da71e8b305d7b9a79f92bcce2dd04a7850c6e2be9abe0c5072592c"} Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.244978 4981 scope.go:117] "RemoveContainer" containerID="4447f82a59c10c570adba5ae0b788f7ef303757c8f74ccd148d1529101b045c5" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.244975 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.246279 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.248475 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-clm2b" event={"ID":"3b4a8933-c57c-4c72-ba77-e6b637a282ee","Type":"ContainerStarted","Data":"ad751a2a3b8f1441777f905245851d231f1a02d49eb9a9ac2a1fa328f8c6d264"} Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.253750 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0491c487-f89e-4ce4-ad63-323ad7624487","Type":"ContainerDied","Data":"d8deb19685d07865d0e26af1912b05d45c45e7b836f6e171faecb6e3bd0d23f3"} Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.253832 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.265754 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-clm2b" podStartSLOduration=2.111796705 podStartE2EDuration="12.265698509s" podCreationTimestamp="2026-02-27 19:17:25 +0000 UTC" firstStartedPulling="2026-02-27 19:17:26.477844731 +0000 UTC m=+1945.956625891" lastFinishedPulling="2026-02-27 19:17:36.631746535 +0000 UTC m=+1956.110527695" observedRunningTime="2026-02-27 19:17:37.263177902 +0000 UTC m=+1956.741959062" watchObservedRunningTime="2026-02-27 19:17:37.265698509 +0000 UTC m=+1956.744479659" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.272144 4981 scope.go:117] "RemoveContainer" containerID="154021a6ea5eb18406c23a7c17cb55e340445beff1d1a6b1d4b62e023d46890d" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.308551 4981 scope.go:117] "RemoveContainer" containerID="641fc7ebec38cf90e79555e2636b8a9f114beaaf5c095283d1a356214e3b075c" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.334515 4981 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-combined-ca-bundle\") pod \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.334796 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.334833 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-scripts\") pod \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.334937 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-httpd-run\") pod \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.335010 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-logs\") pod \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.335027 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-internal-tls-certs\") pod \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " Feb 27 19:17:37 crc kubenswrapper[4981]: 
I0227 19:17:37.335044 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-config-data\") pod \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.335081 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2pzm\" (UniqueName: \"kubernetes.io/projected/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-kube-api-access-z2pzm\") pod \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\" (UID: \"e4177f13-d8db-4bc0-b8d0-ab10ad41141a\") " Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.342601 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.343641 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e4177f13-d8db-4bc0-b8d0-ab10ad41141a" (UID: "e4177f13-d8db-4bc0-b8d0-ab10ad41141a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.344089 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-logs" (OuterVolumeSpecName: "logs") pod "e4177f13-d8db-4bc0-b8d0-ab10ad41141a" (UID: "e4177f13-d8db-4bc0-b8d0-ab10ad41141a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.348323 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "e4177f13-d8db-4bc0-b8d0-ab10ad41141a" (UID: "e4177f13-d8db-4bc0-b8d0-ab10ad41141a"). 
InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.354938 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.366579 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-kube-api-access-z2pzm" (OuterVolumeSpecName: "kube-api-access-z2pzm") pod "e4177f13-d8db-4bc0-b8d0-ab10ad41141a" (UID: "e4177f13-d8db-4bc0-b8d0-ab10ad41141a"). InnerVolumeSpecName "kube-api-access-z2pzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.366867 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-scripts" (OuterVolumeSpecName: "scripts") pod "e4177f13-d8db-4bc0-b8d0-ab10ad41141a" (UID: "e4177f13-d8db-4bc0-b8d0-ab10ad41141a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.367715 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 19:17:37 crc kubenswrapper[4981]: E0227 19:17:37.368231 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4177f13-d8db-4bc0-b8d0-ab10ad41141a" containerName="glance-httpd" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.368251 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4177f13-d8db-4bc0-b8d0-ab10ad41141a" containerName="glance-httpd" Feb 27 19:17:37 crc kubenswrapper[4981]: E0227 19:17:37.368269 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d6d3674-eb9a-4631-998a-73b14544d6d8" containerName="ceilometer-central-agent" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.368278 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d6d3674-eb9a-4631-998a-73b14544d6d8" containerName="ceilometer-central-agent" Feb 27 19:17:37 crc kubenswrapper[4981]: E0227 19:17:37.368296 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0491c487-f89e-4ce4-ad63-323ad7624487" containerName="glance-httpd" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.368303 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0491c487-f89e-4ce4-ad63-323ad7624487" containerName="glance-httpd" Feb 27 19:17:37 crc kubenswrapper[4981]: E0227 19:17:37.368315 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d6d3674-eb9a-4631-998a-73b14544d6d8" containerName="ceilometer-notification-agent" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.368322 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d6d3674-eb9a-4631-998a-73b14544d6d8" containerName="ceilometer-notification-agent" Feb 27 19:17:37 crc kubenswrapper[4981]: E0227 19:17:37.368334 4981 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5d6d3674-eb9a-4631-998a-73b14544d6d8" containerName="sg-core" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.368341 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d6d3674-eb9a-4631-998a-73b14544d6d8" containerName="sg-core" Feb 27 19:17:37 crc kubenswrapper[4981]: E0227 19:17:37.368372 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d6d3674-eb9a-4631-998a-73b14544d6d8" containerName="proxy-httpd" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.368379 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d6d3674-eb9a-4631-998a-73b14544d6d8" containerName="proxy-httpd" Feb 27 19:17:37 crc kubenswrapper[4981]: E0227 19:17:37.368389 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0491c487-f89e-4ce4-ad63-323ad7624487" containerName="glance-log" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.368398 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0491c487-f89e-4ce4-ad63-323ad7624487" containerName="glance-log" Feb 27 19:17:37 crc kubenswrapper[4981]: E0227 19:17:37.368406 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4177f13-d8db-4bc0-b8d0-ab10ad41141a" containerName="glance-log" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.368412 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4177f13-d8db-4bc0-b8d0-ab10ad41141a" containerName="glance-log" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.368935 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d6d3674-eb9a-4631-998a-73b14544d6d8" containerName="proxy-httpd" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.368956 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4177f13-d8db-4bc0-b8d0-ab10ad41141a" containerName="glance-log" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.368969 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0491c487-f89e-4ce4-ad63-323ad7624487" 
containerName="glance-httpd" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.368985 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0491c487-f89e-4ce4-ad63-323ad7624487" containerName="glance-log" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.368993 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4177f13-d8db-4bc0-b8d0-ab10ad41141a" containerName="glance-httpd" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.369003 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d6d3674-eb9a-4631-998a-73b14544d6d8" containerName="ceilometer-notification-agent" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.369016 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d6d3674-eb9a-4631-998a-73b14544d6d8" containerName="ceilometer-central-agent" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.369024 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d6d3674-eb9a-4631-998a-73b14544d6d8" containerName="sg-core" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.370018 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.386897 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.387379 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.410174 4981 scope.go:117] "RemoveContainer" containerID="625319f63b3d20f0554f150a1b2e887bb9d72173a0ada06f323ca047d1e9d077" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.436949 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.438579 4981 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.438605 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-logs\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.438617 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2pzm\" (UniqueName: \"kubernetes.io/projected/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-kube-api-access-z2pzm\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.438646 4981 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.438655 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.466455 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4177f13-d8db-4bc0-b8d0-ab10ad41141a" (UID: "e4177f13-d8db-4bc0-b8d0-ab10ad41141a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.466955 4981 scope.go:117] "RemoveContainer" containerID="cac0b92f8247c85d31a507b6ae253c037102dd71220d47042a591b3da9346b43" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.478091 4981 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.492319 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e4177f13-d8db-4bc0-b8d0-ab10ad41141a" (UID: "e4177f13-d8db-4bc0-b8d0-ab10ad41141a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.499384 4981 scope.go:117] "RemoveContainer" containerID="5e181015ebaa0f3e4a87d5a2fd2fa0cec47474fd6f1f3e6e3ce25f06cffdcb82" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.533669 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-config-data" (OuterVolumeSpecName: "config-data") pod "e4177f13-d8db-4bc0-b8d0-ab10ad41141a" (UID: "e4177f13-d8db-4bc0-b8d0-ab10ad41141a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.540080 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-combined-ca-bundle\") pod \"5d6d3674-eb9a-4631-998a-73b14544d6d8\" (UID: \"5d6d3674-eb9a-4631-998a-73b14544d6d8\") " Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.540367 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1bafd9d-a283-406e-900b-3c5d1aae55fe-config-data\") pod \"glance-default-external-api-0\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.540685 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1bafd9d-a283-406e-900b-3c5d1aae55fe-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.540736 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-476qt\" (UniqueName: \"kubernetes.io/projected/c1bafd9d-a283-406e-900b-3c5d1aae55fe-kube-api-access-476qt\") pod \"glance-default-external-api-0\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.540793 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1bafd9d-a283-406e-900b-3c5d1aae55fe-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.540822 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.540852 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1bafd9d-a283-406e-900b-3c5d1aae55fe-scripts\") pod \"glance-default-external-api-0\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.540887 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c1bafd9d-a283-406e-900b-3c5d1aae55fe-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.540945 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1bafd9d-a283-406e-900b-3c5d1aae55fe-logs\") pod \"glance-default-external-api-0\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.541132 4981 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.541227 4981 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.541304 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4177f13-d8db-4bc0-b8d0-ab10ad41141a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.541382 4981 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.546530 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d6d3674-eb9a-4631-998a-73b14544d6d8" (UID: "5d6d3674-eb9a-4631-998a-73b14544d6d8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.642755 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0491c487-f89e-4ce4-ad63-323ad7624487" path="/var/lib/kubelet/pods/0491c487-f89e-4ce4-ad63-323ad7624487/volumes" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.642936 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1bafd9d-a283-406e-900b-3c5d1aae55fe-logs\") pod \"glance-default-external-api-0\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.642978 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1bafd9d-a283-406e-900b-3c5d1aae55fe-config-data\") pod \"glance-default-external-api-0\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.643015 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1bafd9d-a283-406e-900b-3c5d1aae55fe-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.643043 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-476qt\" (UniqueName: \"kubernetes.io/projected/c1bafd9d-a283-406e-900b-3c5d1aae55fe-kube-api-access-476qt\") pod \"glance-default-external-api-0\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.643091 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1bafd9d-a283-406e-900b-3c5d1aae55fe-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.643119 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.643144 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1bafd9d-a283-406e-900b-3c5d1aae55fe-scripts\") pod \"glance-default-external-api-0\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.643189 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c1bafd9d-a283-406e-900b-3c5d1aae55fe-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.643262 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d6d3674-eb9a-4631-998a-73b14544d6d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.643905 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c1bafd9d-a283-406e-900b-3c5d1aae55fe-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " 
pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.643991 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1bafd9d-a283-406e-900b-3c5d1aae55fe-logs\") pod \"glance-default-external-api-0\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.644404 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.648425 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1bafd9d-a283-406e-900b-3c5d1aae55fe-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.656023 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1bafd9d-a283-406e-900b-3c5d1aae55fe-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.658775 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1bafd9d-a283-406e-900b-3c5d1aae55fe-scripts\") pod \"glance-default-external-api-0\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 
19:17:37.659848 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1bafd9d-a283-406e-900b-3c5d1aae55fe-config-data\") pod \"glance-default-external-api-0\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.663464 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-476qt\" (UniqueName: \"kubernetes.io/projected/c1bafd9d-a283-406e-900b-3c5d1aae55fe-kube-api-access-476qt\") pod \"glance-default-external-api-0\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.675939 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.735401 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.891206 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.910235 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.931107 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.933535 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.949814 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.958262 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 19:17:37 crc kubenswrapper[4981]: I0227 19:17:37.959256 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.054447 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76944f66-865c-4848-b348-ff9e65e9220b-config-data\") pod \"ceilometer-0\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " pod="openstack/ceilometer-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.054524 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76944f66-865c-4848-b348-ff9e65e9220b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " pod="openstack/ceilometer-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.054573 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76944f66-865c-4848-b348-ff9e65e9220b-run-httpd\") pod \"ceilometer-0\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " pod="openstack/ceilometer-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.054626 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76944f66-865c-4848-b348-ff9e65e9220b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " 
pod="openstack/ceilometer-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.054714 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pld4q\" (UniqueName: \"kubernetes.io/projected/76944f66-865c-4848-b348-ff9e65e9220b-kube-api-access-pld4q\") pod \"ceilometer-0\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " pod="openstack/ceilometer-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.054740 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76944f66-865c-4848-b348-ff9e65e9220b-log-httpd\") pod \"ceilometer-0\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " pod="openstack/ceilometer-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.054759 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76944f66-865c-4848-b348-ff9e65e9220b-scripts\") pod \"ceilometer-0\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " pod="openstack/ceilometer-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.158499 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76944f66-865c-4848-b348-ff9e65e9220b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " pod="openstack/ceilometer-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.158588 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pld4q\" (UniqueName: \"kubernetes.io/projected/76944f66-865c-4848-b348-ff9e65e9220b-kube-api-access-pld4q\") pod \"ceilometer-0\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " pod="openstack/ceilometer-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.158609 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76944f66-865c-4848-b348-ff9e65e9220b-log-httpd\") pod \"ceilometer-0\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " pod="openstack/ceilometer-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.159007 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76944f66-865c-4848-b348-ff9e65e9220b-scripts\") pod \"ceilometer-0\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " pod="openstack/ceilometer-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.159359 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76944f66-865c-4848-b348-ff9e65e9220b-log-httpd\") pod \"ceilometer-0\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " pod="openstack/ceilometer-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.159566 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76944f66-865c-4848-b348-ff9e65e9220b-config-data\") pod \"ceilometer-0\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " pod="openstack/ceilometer-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.160571 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76944f66-865c-4848-b348-ff9e65e9220b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " pod="openstack/ceilometer-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.160672 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76944f66-865c-4848-b348-ff9e65e9220b-run-httpd\") pod \"ceilometer-0\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " pod="openstack/ceilometer-0" Feb 27 
19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.161120 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76944f66-865c-4848-b348-ff9e65e9220b-run-httpd\") pod \"ceilometer-0\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " pod="openstack/ceilometer-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.179550 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76944f66-865c-4848-b348-ff9e65e9220b-config-data\") pod \"ceilometer-0\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " pod="openstack/ceilometer-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.184850 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76944f66-865c-4848-b348-ff9e65e9220b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " pod="openstack/ceilometer-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.194920 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76944f66-865c-4848-b348-ff9e65e9220b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " pod="openstack/ceilometer-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.202273 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76944f66-865c-4848-b348-ff9e65e9220b-scripts\") pod \"ceilometer-0\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " pod="openstack/ceilometer-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.220240 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pld4q\" (UniqueName: \"kubernetes.io/projected/76944f66-865c-4848-b348-ff9e65e9220b-kube-api-access-pld4q\") pod \"ceilometer-0\" (UID: 
\"76944f66-865c-4848-b348-ff9e65e9220b\") " pod="openstack/ceilometer-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.267186 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.338827 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.356608 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.366396 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.368342 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.368589 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.371005 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.371346 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.391731 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 19:17:38 crc kubenswrapper[4981]: W0227 19:17:38.497793 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1bafd9d_a283_406e_900b_3c5d1aae55fe.slice/crio-5f4fc666f17be0900b72ee1b730d61306746ca2c93508d547a052c97f9e5ac28 WatchSource:0}: Error finding container 5f4fc666f17be0900b72ee1b730d61306746ca2c93508d547a052c97f9e5ac28: Status 404 returned error can't find the container with id 5f4fc666f17be0900b72ee1b730d61306746ca2c93508d547a052c97f9e5ac28 Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.504334 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.567899 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9sk9\" (UniqueName: \"kubernetes.io/projected/0aa05f73-e7d2-440b-ab1f-780f23c26272-kube-api-access-z9sk9\") pod \"glance-default-internal-api-0\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.567979 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aa05f73-e7d2-440b-ab1f-780f23c26272-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.568071 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aa05f73-e7d2-440b-ab1f-780f23c26272-logs\") pod \"glance-default-internal-api-0\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.568165 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0aa05f73-e7d2-440b-ab1f-780f23c26272-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.568263 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aa05f73-e7d2-440b-ab1f-780f23c26272-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.568349 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0aa05f73-e7d2-440b-ab1f-780f23c26272-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.568396 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.568443 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa05f73-e7d2-440b-ab1f-780f23c26272-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.670276 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0aa05f73-e7d2-440b-ab1f-780f23c26272-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.670323 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.670362 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa05f73-e7d2-440b-ab1f-780f23c26272-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.670406 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9sk9\" (UniqueName: \"kubernetes.io/projected/0aa05f73-e7d2-440b-ab1f-780f23c26272-kube-api-access-z9sk9\") pod 
\"glance-default-internal-api-0\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.670422 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aa05f73-e7d2-440b-ab1f-780f23c26272-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.670473 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aa05f73-e7d2-440b-ab1f-780f23c26272-logs\") pod \"glance-default-internal-api-0\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.670520 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0aa05f73-e7d2-440b-ab1f-780f23c26272-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.670557 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aa05f73-e7d2-440b-ab1f-780f23c26272-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.671925 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aa05f73-e7d2-440b-ab1f-780f23c26272-logs\") pod \"glance-default-internal-api-0\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " 
pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.674018 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.674176 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0aa05f73-e7d2-440b-ab1f-780f23c26272-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.681192 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa05f73-e7d2-440b-ab1f-780f23c26272-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.683740 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0aa05f73-e7d2-440b-ab1f-780f23c26272-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.685949 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aa05f73-e7d2-440b-ab1f-780f23c26272-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc 
kubenswrapper[4981]: I0227 19:17:38.690637 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aa05f73-e7d2-440b-ab1f-780f23c26272-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.695780 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9sk9\" (UniqueName: \"kubernetes.io/projected/0aa05f73-e7d2-440b-ab1f-780f23c26272-kube-api-access-z9sk9\") pod \"glance-default-internal-api-0\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.712110 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " pod="openstack/glance-default-internal-api-0" Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.834811 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:17:38 crc kubenswrapper[4981]: I0227 19:17:38.992948 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:39 crc kubenswrapper[4981]: I0227 19:17:39.293730 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c1bafd9d-a283-406e-900b-3c5d1aae55fe","Type":"ContainerStarted","Data":"c56f4d215954c7c816813a5607ee1845d8bcd2e458593efdcc15c07e8b8dfdc9"} Feb 27 19:17:39 crc kubenswrapper[4981]: I0227 19:17:39.300554 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c1bafd9d-a283-406e-900b-3c5d1aae55fe","Type":"ContainerStarted","Data":"5f4fc666f17be0900b72ee1b730d61306746ca2c93508d547a052c97f9e5ac28"} Feb 27 19:17:39 crc kubenswrapper[4981]: I0227 19:17:39.306114 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76944f66-865c-4848-b348-ff9e65e9220b","Type":"ContainerStarted","Data":"2bc2ca4146c1dba99d8688f2e804ad240aabd5fefe566c47fc544ab053f0fc19"} Feb 27 19:17:39 crc kubenswrapper[4981]: I0227 19:17:39.596461 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 19:17:39 crc kubenswrapper[4981]: I0227 19:17:39.655317 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d6d3674-eb9a-4631-998a-73b14544d6d8" path="/var/lib/kubelet/pods/5d6d3674-eb9a-4631-998a-73b14544d6d8/volumes" Feb 27 19:17:39 crc kubenswrapper[4981]: I0227 19:17:39.656495 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4177f13-d8db-4bc0-b8d0-ab10ad41141a" path="/var/lib/kubelet/pods/e4177f13-d8db-4bc0-b8d0-ab10ad41141a/volumes" Feb 27 19:17:40 crc kubenswrapper[4981]: I0227 19:17:40.317259 4981 generic.go:334] "Generic (PLEG): container finished" podID="433a9f91-dd8c-4e01-9133-fe5e143bc696" containerID="31fade82185f1e83c1d90a9aa653996bc4068bb402a2e3fde43cb5775094559e" exitCode=0 Feb 27 19:17:40 crc kubenswrapper[4981]: I0227 19:17:40.317352 4981 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9rclp" event={"ID":"433a9f91-dd8c-4e01-9133-fe5e143bc696","Type":"ContainerDied","Data":"31fade82185f1e83c1d90a9aa653996bc4068bb402a2e3fde43cb5775094559e"} Feb 27 19:17:40 crc kubenswrapper[4981]: I0227 19:17:40.322390 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c1bafd9d-a283-406e-900b-3c5d1aae55fe","Type":"ContainerStarted","Data":"9252062e4f9078c805d000ada9f14dcf8cf9e94119dfd3eee20804d3a99f7de4"} Feb 27 19:17:40 crc kubenswrapper[4981]: I0227 19:17:40.325305 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76944f66-865c-4848-b348-ff9e65e9220b","Type":"ContainerStarted","Data":"d677d3ff8573097e4490affce078550fedf68a838cdae9db86de154340c89d08"} Feb 27 19:17:40 crc kubenswrapper[4981]: I0227 19:17:40.333358 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0aa05f73-e7d2-440b-ab1f-780f23c26272","Type":"ContainerStarted","Data":"e2cf4cb69297e4610ad8e0dc0631d68b5d64796315f3d0d1dae971f44173c3b0"} Feb 27 19:17:40 crc kubenswrapper[4981]: I0227 19:17:40.373447 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.373423393 podStartE2EDuration="3.373423393s" podCreationTimestamp="2026-02-27 19:17:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:17:40.365076898 +0000 UTC m=+1959.843858048" watchObservedRunningTime="2026-02-27 19:17:40.373423393 +0000 UTC m=+1959.852204553" Feb 27 19:17:41 crc kubenswrapper[4981]: I0227 19:17:41.040529 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-2b17-account-create-update-dhdlp"] Feb 27 19:17:41 crc kubenswrapper[4981]: I0227 19:17:41.056316 4981 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-v2rmf"] Feb 27 19:17:41 crc kubenswrapper[4981]: I0227 19:17:41.066514 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-9966-account-create-update-6d4f6"] Feb 27 19:17:41 crc kubenswrapper[4981]: I0227 19:17:41.082827 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-kk9rm"] Feb 27 19:17:41 crc kubenswrapper[4981]: I0227 19:17:41.096382 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-v2rmf"] Feb 27 19:17:41 crc kubenswrapper[4981]: I0227 19:17:41.113405 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-kk9rm"] Feb 27 19:17:41 crc kubenswrapper[4981]: I0227 19:17:41.125377 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-2b17-account-create-update-dhdlp"] Feb 27 19:17:41 crc kubenswrapper[4981]: I0227 19:17:41.136445 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-9966-account-create-update-6d4f6"] Feb 27 19:17:41 crc kubenswrapper[4981]: I0227 19:17:41.384332 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0aa05f73-e7d2-440b-ab1f-780f23c26272","Type":"ContainerStarted","Data":"2affc55a229a8b585a8ade5bf43b2c239ea9c89cb121110f24bf358bb120da2a"} Feb 27 19:17:41 crc kubenswrapper[4981]: I0227 19:17:41.648149 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dec7952-ffcd-45f1-b788-669b9a76f577" path="/var/lib/kubelet/pods/0dec7952-ffcd-45f1-b788-669b9a76f577/volumes" Feb 27 19:17:41 crc kubenswrapper[4981]: I0227 19:17:41.649020 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3edce3d3-96e5-4fbe-8ef7-ba2d01d06025" path="/var/lib/kubelet/pods/3edce3d3-96e5-4fbe-8ef7-ba2d01d06025/volumes" Feb 27 19:17:41 crc kubenswrapper[4981]: I0227 19:17:41.649634 4981 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5088f9d-7a73-4e90-bb3e-b66bc16b840f" path="/var/lib/kubelet/pods/a5088f9d-7a73-4e90-bb3e-b66bc16b840f/volumes" Feb 27 19:17:41 crc kubenswrapper[4981]: I0227 19:17:41.650355 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fae1f9f4-459b-4894-ba4c-db79218e7fb0" path="/var/lib/kubelet/pods/fae1f9f4-459b-4894-ba4c-db79218e7fb0/volumes" Feb 27 19:17:41 crc kubenswrapper[4981]: I0227 19:17:41.829436 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9rclp" Feb 27 19:17:41 crc kubenswrapper[4981]: I0227 19:17:41.935924 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grbjq\" (UniqueName: \"kubernetes.io/projected/433a9f91-dd8c-4e01-9133-fe5e143bc696-kube-api-access-grbjq\") pod \"433a9f91-dd8c-4e01-9133-fe5e143bc696\" (UID: \"433a9f91-dd8c-4e01-9133-fe5e143bc696\") " Feb 27 19:17:41 crc kubenswrapper[4981]: I0227 19:17:41.936040 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/433a9f91-dd8c-4e01-9133-fe5e143bc696-combined-ca-bundle\") pod \"433a9f91-dd8c-4e01-9133-fe5e143bc696\" (UID: \"433a9f91-dd8c-4e01-9133-fe5e143bc696\") " Feb 27 19:17:41 crc kubenswrapper[4981]: I0227 19:17:41.936110 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/433a9f91-dd8c-4e01-9133-fe5e143bc696-config\") pod \"433a9f91-dd8c-4e01-9133-fe5e143bc696\" (UID: \"433a9f91-dd8c-4e01-9133-fe5e143bc696\") " Feb 27 19:17:41 crc kubenswrapper[4981]: I0227 19:17:41.944124 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/433a9f91-dd8c-4e01-9133-fe5e143bc696-kube-api-access-grbjq" (OuterVolumeSpecName: "kube-api-access-grbjq") pod "433a9f91-dd8c-4e01-9133-fe5e143bc696" (UID: 
"433a9f91-dd8c-4e01-9133-fe5e143bc696"). InnerVolumeSpecName "kube-api-access-grbjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:17:41 crc kubenswrapper[4981]: I0227 19:17:41.991679 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/433a9f91-dd8c-4e01-9133-fe5e143bc696-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "433a9f91-dd8c-4e01-9133-fe5e143bc696" (UID: "433a9f91-dd8c-4e01-9133-fe5e143bc696"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:41 crc kubenswrapper[4981]: I0227 19:17:41.994238 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/433a9f91-dd8c-4e01-9133-fe5e143bc696-config" (OuterVolumeSpecName: "config") pod "433a9f91-dd8c-4e01-9133-fe5e143bc696" (UID: "433a9f91-dd8c-4e01-9133-fe5e143bc696"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.038664 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/433a9f91-dd8c-4e01-9133-fe5e143bc696-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.038707 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grbjq\" (UniqueName: \"kubernetes.io/projected/433a9f91-dd8c-4e01-9133-fe5e143bc696-kube-api-access-grbjq\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.038753 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/433a9f91-dd8c-4e01-9133-fe5e143bc696-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.399682 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"0aa05f73-e7d2-440b-ab1f-780f23c26272","Type":"ContainerStarted","Data":"3795787eaffa6416278d5f620720e96bed3cebe839428ad34ee4a6b1bbcfb5ed"} Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.406923 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9rclp" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.406962 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9rclp" event={"ID":"433a9f91-dd8c-4e01-9133-fe5e143bc696","Type":"ContainerDied","Data":"2e468d6cc0207af5d70dd5387f901a2a7969efe055dff5c8ada69c2822754457"} Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.414175 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e468d6cc0207af5d70dd5387f901a2a7969efe055dff5c8ada69c2822754457" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.416238 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76944f66-865c-4848-b348-ff9e65e9220b","Type":"ContainerStarted","Data":"d34e3533c4941a1fe3803546e62fa619ae863cd16128b4190b8518edce6f1cb1"} Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.591982 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.591952136 podStartE2EDuration="4.591952136s" podCreationTimestamp="2026-02-27 19:17:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:17:42.425123127 +0000 UTC m=+1961.903904287" watchObservedRunningTime="2026-02-27 19:17:42.591952136 +0000 UTC m=+1962.070733296" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.595218 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-vx98x"] Feb 27 19:17:42 crc kubenswrapper[4981]: E0227 19:17:42.595646 4981 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="433a9f91-dd8c-4e01-9133-fe5e143bc696" containerName="neutron-db-sync" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.595665 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="433a9f91-dd8c-4e01-9133-fe5e143bc696" containerName="neutron-db-sync" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.595852 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="433a9f91-dd8c-4e01-9133-fe5e143bc696" containerName="neutron-db-sync" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.596846 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.628702 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-vx98x"] Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.698212 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-647d4dfc78-slpwd"] Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.702976 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-647d4dfc78-slpwd" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.708594 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-6sr2n" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.708697 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.708763 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.708817 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.739394 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-647d4dfc78-slpwd"] Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.756689 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-config\") pod \"dnsmasq-dns-5c9776ccc5-vx98x\" (UID: \"2d817344-b2eb-45f6-a948-0e530172230e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.756789 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-vx98x\" (UID: \"2d817344-b2eb-45f6-a948-0e530172230e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.756819 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncxmp\" (UniqueName: \"kubernetes.io/projected/2d817344-b2eb-45f6-a948-0e530172230e-kube-api-access-ncxmp\") pod 
\"dnsmasq-dns-5c9776ccc5-vx98x\" (UID: \"2d817344-b2eb-45f6-a948-0e530172230e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.756845 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-vx98x\" (UID: \"2d817344-b2eb-45f6-a948-0e530172230e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.756936 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-vx98x\" (UID: \"2d817344-b2eb-45f6-a948-0e530172230e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.756965 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-vx98x\" (UID: \"2d817344-b2eb-45f6-a948-0e530172230e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.860763 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-vx98x\" (UID: \"2d817344-b2eb-45f6-a948-0e530172230e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.860829 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncxmp\" (UniqueName: 
\"kubernetes.io/projected/2d817344-b2eb-45f6-a948-0e530172230e-kube-api-access-ncxmp\") pod \"dnsmasq-dns-5c9776ccc5-vx98x\" (UID: \"2d817344-b2eb-45f6-a948-0e530172230e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.860874 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w7km\" (UniqueName: \"kubernetes.io/projected/5bf50bd5-6795-47b1-a50a-093079710979-kube-api-access-2w7km\") pod \"neutron-647d4dfc78-slpwd\" (UID: \"5bf50bd5-6795-47b1-a50a-093079710979\") " pod="openstack/neutron-647d4dfc78-slpwd" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.860908 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-vx98x\" (UID: \"2d817344-b2eb-45f6-a948-0e530172230e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.860977 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bf50bd5-6795-47b1-a50a-093079710979-ovndb-tls-certs\") pod \"neutron-647d4dfc78-slpwd\" (UID: \"5bf50bd5-6795-47b1-a50a-093079710979\") " pod="openstack/neutron-647d4dfc78-slpwd" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.861046 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf50bd5-6795-47b1-a50a-093079710979-combined-ca-bundle\") pod \"neutron-647d4dfc78-slpwd\" (UID: \"5bf50bd5-6795-47b1-a50a-093079710979\") " pod="openstack/neutron-647d4dfc78-slpwd" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.861111 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-config\" (UniqueName: \"kubernetes.io/secret/5bf50bd5-6795-47b1-a50a-093079710979-httpd-config\") pod \"neutron-647d4dfc78-slpwd\" (UID: \"5bf50bd5-6795-47b1-a50a-093079710979\") " pod="openstack/neutron-647d4dfc78-slpwd" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.861175 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-vx98x\" (UID: \"2d817344-b2eb-45f6-a948-0e530172230e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.861292 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5bf50bd5-6795-47b1-a50a-093079710979-config\") pod \"neutron-647d4dfc78-slpwd\" (UID: \"5bf50bd5-6795-47b1-a50a-093079710979\") " pod="openstack/neutron-647d4dfc78-slpwd" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.861351 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-vx98x\" (UID: \"2d817344-b2eb-45f6-a948-0e530172230e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.861444 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-config\") pod \"dnsmasq-dns-5c9776ccc5-vx98x\" (UID: \"2d817344-b2eb-45f6-a948-0e530172230e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.862149 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-vx98x\" (UID: \"2d817344-b2eb-45f6-a948-0e530172230e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.862358 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-vx98x\" (UID: \"2d817344-b2eb-45f6-a948-0e530172230e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.862763 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-vx98x\" (UID: \"2d817344-b2eb-45f6-a948-0e530172230e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.862914 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-vx98x\" (UID: \"2d817344-b2eb-45f6-a948-0e530172230e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.863102 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-config\") pod \"dnsmasq-dns-5c9776ccc5-vx98x\" (UID: \"2d817344-b2eb-45f6-a948-0e530172230e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.883482 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncxmp\" (UniqueName: \"kubernetes.io/projected/2d817344-b2eb-45f6-a948-0e530172230e-kube-api-access-ncxmp\") pod 
\"dnsmasq-dns-5c9776ccc5-vx98x\" (UID: \"2d817344-b2eb-45f6-a948-0e530172230e\") " pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.935810 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.964228 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bf50bd5-6795-47b1-a50a-093079710979-ovndb-tls-certs\") pod \"neutron-647d4dfc78-slpwd\" (UID: \"5bf50bd5-6795-47b1-a50a-093079710979\") " pod="openstack/neutron-647d4dfc78-slpwd" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.964306 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf50bd5-6795-47b1-a50a-093079710979-combined-ca-bundle\") pod \"neutron-647d4dfc78-slpwd\" (UID: \"5bf50bd5-6795-47b1-a50a-093079710979\") " pod="openstack/neutron-647d4dfc78-slpwd" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.964386 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5bf50bd5-6795-47b1-a50a-093079710979-httpd-config\") pod \"neutron-647d4dfc78-slpwd\" (UID: \"5bf50bd5-6795-47b1-a50a-093079710979\") " pod="openstack/neutron-647d4dfc78-slpwd" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.964474 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5bf50bd5-6795-47b1-a50a-093079710979-config\") pod \"neutron-647d4dfc78-slpwd\" (UID: \"5bf50bd5-6795-47b1-a50a-093079710979\") " pod="openstack/neutron-647d4dfc78-slpwd" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.965316 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w7km\" (UniqueName: 
\"kubernetes.io/projected/5bf50bd5-6795-47b1-a50a-093079710979-kube-api-access-2w7km\") pod \"neutron-647d4dfc78-slpwd\" (UID: \"5bf50bd5-6795-47b1-a50a-093079710979\") " pod="openstack/neutron-647d4dfc78-slpwd" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.968984 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5bf50bd5-6795-47b1-a50a-093079710979-config\") pod \"neutron-647d4dfc78-slpwd\" (UID: \"5bf50bd5-6795-47b1-a50a-093079710979\") " pod="openstack/neutron-647d4dfc78-slpwd" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.969109 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf50bd5-6795-47b1-a50a-093079710979-combined-ca-bundle\") pod \"neutron-647d4dfc78-slpwd\" (UID: \"5bf50bd5-6795-47b1-a50a-093079710979\") " pod="openstack/neutron-647d4dfc78-slpwd" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.969865 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5bf50bd5-6795-47b1-a50a-093079710979-httpd-config\") pod \"neutron-647d4dfc78-slpwd\" (UID: \"5bf50bd5-6795-47b1-a50a-093079710979\") " pod="openstack/neutron-647d4dfc78-slpwd" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.985191 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bf50bd5-6795-47b1-a50a-093079710979-ovndb-tls-certs\") pod \"neutron-647d4dfc78-slpwd\" (UID: \"5bf50bd5-6795-47b1-a50a-093079710979\") " pod="openstack/neutron-647d4dfc78-slpwd" Feb 27 19:17:42 crc kubenswrapper[4981]: I0227 19:17:42.988923 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w7km\" (UniqueName: \"kubernetes.io/projected/5bf50bd5-6795-47b1-a50a-093079710979-kube-api-access-2w7km\") pod \"neutron-647d4dfc78-slpwd\" (UID: 
\"5bf50bd5-6795-47b1-a50a-093079710979\") " pod="openstack/neutron-647d4dfc78-slpwd" Feb 27 19:17:43 crc kubenswrapper[4981]: I0227 19:17:43.028341 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-647d4dfc78-slpwd" Feb 27 19:17:43 crc kubenswrapper[4981]: I0227 19:17:43.428187 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76944f66-865c-4848-b348-ff9e65e9220b","Type":"ContainerStarted","Data":"8ae8785b7965de915c200e8e966c70177271f451899571f9f901ccf0e46514f1"} Feb 27 19:17:43 crc kubenswrapper[4981]: I0227 19:17:43.523532 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-vx98x"] Feb 27 19:17:43 crc kubenswrapper[4981]: W0227 19:17:43.531826 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d817344_b2eb_45f6_a948_0e530172230e.slice/crio-3d9d0e47ff79c40b30c6325c775257182e7df297271d554dfdfe6fa50a8c43c8 WatchSource:0}: Error finding container 3d9d0e47ff79c40b30c6325c775257182e7df297271d554dfdfe6fa50a8c43c8: Status 404 returned error can't find the container with id 3d9d0e47ff79c40b30c6325c775257182e7df297271d554dfdfe6fa50a8c43c8 Feb 27 19:17:43 crc kubenswrapper[4981]: I0227 19:17:43.725790 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-647d4dfc78-slpwd"] Feb 27 19:17:43 crc kubenswrapper[4981]: W0227 19:17:43.738883 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bf50bd5_6795_47b1_a50a_093079710979.slice/crio-d500027d02f5524d53f999f81a33375fe6dd76cf932bc1665bf0f07053860e85 WatchSource:0}: Error finding container d500027d02f5524d53f999f81a33375fe6dd76cf932bc1665bf0f07053860e85: Status 404 returned error can't find the container with id d500027d02f5524d53f999f81a33375fe6dd76cf932bc1665bf0f07053860e85 Feb 27 19:17:44 crc 
kubenswrapper[4981]: I0227 19:17:44.441789 4981 generic.go:334] "Generic (PLEG): container finished" podID="2d817344-b2eb-45f6-a948-0e530172230e" containerID="12c8150f01b22482f259335ce330f0dce75e1c781aaaf7e1683e273b1df60083" exitCode=0 Feb 27 19:17:44 crc kubenswrapper[4981]: I0227 19:17:44.442185 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" event={"ID":"2d817344-b2eb-45f6-a948-0e530172230e","Type":"ContainerDied","Data":"12c8150f01b22482f259335ce330f0dce75e1c781aaaf7e1683e273b1df60083"} Feb 27 19:17:44 crc kubenswrapper[4981]: I0227 19:17:44.442229 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" event={"ID":"2d817344-b2eb-45f6-a948-0e530172230e","Type":"ContainerStarted","Data":"3d9d0e47ff79c40b30c6325c775257182e7df297271d554dfdfe6fa50a8c43c8"} Feb 27 19:17:44 crc kubenswrapper[4981]: I0227 19:17:44.456842 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-647d4dfc78-slpwd" event={"ID":"5bf50bd5-6795-47b1-a50a-093079710979","Type":"ContainerStarted","Data":"ee96d4c90dd898a33ba6e7f046f32c826f786b486a68ad26dfd855005fa5ad42"} Feb 27 19:17:44 crc kubenswrapper[4981]: I0227 19:17:44.456898 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-647d4dfc78-slpwd" event={"ID":"5bf50bd5-6795-47b1-a50a-093079710979","Type":"ContainerStarted","Data":"944770747a40efacbf1c2aef296fcbb221224782752b669f1d4a4e73447ad0b2"} Feb 27 19:17:44 crc kubenswrapper[4981]: I0227 19:17:44.456912 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-647d4dfc78-slpwd" event={"ID":"5bf50bd5-6795-47b1-a50a-093079710979","Type":"ContainerStarted","Data":"d500027d02f5524d53f999f81a33375fe6dd76cf932bc1665bf0f07053860e85"} Feb 27 19:17:44 crc kubenswrapper[4981]: I0227 19:17:44.457847 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-647d4dfc78-slpwd" Feb 27 19:17:44 crc 
kubenswrapper[4981]: I0227 19:17:44.550605 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-647d4dfc78-slpwd" podStartSLOduration=2.550580079 podStartE2EDuration="2.550580079s" podCreationTimestamp="2026-02-27 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:17:44.498864626 +0000 UTC m=+1963.977645786" watchObservedRunningTime="2026-02-27 19:17:44.550580079 +0000 UTC m=+1964.029361239" Feb 27 19:17:45 crc kubenswrapper[4981]: I0227 19:17:45.033096 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-zxxsl"] Feb 27 19:17:45 crc kubenswrapper[4981]: I0227 19:17:45.043707 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-zxxsl"] Feb 27 19:17:45 crc kubenswrapper[4981]: I0227 19:17:45.059941 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-flk8g"] Feb 27 19:17:45 crc kubenswrapper[4981]: I0227 19:17:45.072126 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e72a-account-create-update-p99q6"] Feb 27 19:17:45 crc kubenswrapper[4981]: I0227 19:17:45.081974 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e72a-account-create-update-p99q6"] Feb 27 19:17:45 crc kubenswrapper[4981]: I0227 19:17:45.090849 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-flk8g"] Feb 27 19:17:45 crc kubenswrapper[4981]: I0227 19:17:45.468817 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76944f66-865c-4848-b348-ff9e65e9220b","Type":"ContainerStarted","Data":"9b4409a15908aff9a5f35bc9691da0757d5b11871ac63d9122e88d7577b03163"} Feb 27 19:17:45 crc kubenswrapper[4981]: I0227 19:17:45.469486 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Feb 27 19:17:45 crc kubenswrapper[4981]: I0227 19:17:45.471201 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" event={"ID":"2d817344-b2eb-45f6-a948-0e530172230e","Type":"ContainerStarted","Data":"b038edbc7efa55a4e22ca7d81df3fa8f8ff302284f1571f439df9fd91c23ed20"} Feb 27 19:17:45 crc kubenswrapper[4981]: I0227 19:17:45.471490 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" Feb 27 19:17:45 crc kubenswrapper[4981]: I0227 19:17:45.517753 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.9200547390000002 podStartE2EDuration="8.517728258s" podCreationTimestamp="2026-02-27 19:17:37 +0000 UTC" firstStartedPulling="2026-02-27 19:17:38.839340822 +0000 UTC m=+1958.318121992" lastFinishedPulling="2026-02-27 19:17:44.437014351 +0000 UTC m=+1963.915795511" observedRunningTime="2026-02-27 19:17:45.51028391 +0000 UTC m=+1964.989065090" watchObservedRunningTime="2026-02-27 19:17:45.517728258 +0000 UTC m=+1964.996509408" Feb 27 19:17:45 crc kubenswrapper[4981]: I0227 19:17:45.545219 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" podStartSLOduration=3.545194619 podStartE2EDuration="3.545194619s" podCreationTimestamp="2026-02-27 19:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:17:45.537172984 +0000 UTC m=+1965.015954144" watchObservedRunningTime="2026-02-27 19:17:45.545194619 +0000 UTC m=+1965.023975779" Feb 27 19:17:45 crc kubenswrapper[4981]: I0227 19:17:45.641637 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40cbb61c-1aa8-4477-8579-76699afce28b" path="/var/lib/kubelet/pods/40cbb61c-1aa8-4477-8579-76699afce28b/volumes" Feb 27 19:17:45 crc kubenswrapper[4981]: 
I0227 19:17:45.642363 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="619b6b33-ff2d-4b2c-984e-f1c65bfcdffd" path="/var/lib/kubelet/pods/619b6b33-ff2d-4b2c-984e-f1c65bfcdffd/volumes" Feb 27 19:17:45 crc kubenswrapper[4981]: I0227 19:17:45.643002 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f541ace2-11c0-4232-b34b-d7079bfc597b" path="/var/lib/kubelet/pods/f541ace2-11c0-4232-b34b-d7079bfc597b/volumes" Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.599634 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5b6bf89d9-5xrv6"] Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.603304 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.611088 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.612175 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.616494 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b6bf89d9-5xrv6"] Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.760334 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-combined-ca-bundle\") pod \"neutron-5b6bf89d9-5xrv6\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.760410 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-internal-tls-certs\") pod 
\"neutron-5b6bf89d9-5xrv6\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.760785 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-ovndb-tls-certs\") pod \"neutron-5b6bf89d9-5xrv6\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.760885 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-httpd-config\") pod \"neutron-5b6bf89d9-5xrv6\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.761162 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ksqs\" (UniqueName: \"kubernetes.io/projected/e691b557-a141-44b1-a2c7-4ba36af55a15-kube-api-access-4ksqs\") pod \"neutron-5b6bf89d9-5xrv6\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.761325 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-config\") pod \"neutron-5b6bf89d9-5xrv6\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.761676 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-public-tls-certs\") pod 
\"neutron-5b6bf89d9-5xrv6\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.864335 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-config\") pod \"neutron-5b6bf89d9-5xrv6\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.864425 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-public-tls-certs\") pod \"neutron-5b6bf89d9-5xrv6\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.864490 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-combined-ca-bundle\") pod \"neutron-5b6bf89d9-5xrv6\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.864529 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-internal-tls-certs\") pod \"neutron-5b6bf89d9-5xrv6\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.864604 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-ovndb-tls-certs\") pod \"neutron-5b6bf89d9-5xrv6\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " 
pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.864631 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-httpd-config\") pod \"neutron-5b6bf89d9-5xrv6\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.864700 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ksqs\" (UniqueName: \"kubernetes.io/projected/e691b557-a141-44b1-a2c7-4ba36af55a15-kube-api-access-4ksqs\") pod \"neutron-5b6bf89d9-5xrv6\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.879313 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-httpd-config\") pod \"neutron-5b6bf89d9-5xrv6\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.879784 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-config\") pod \"neutron-5b6bf89d9-5xrv6\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.880380 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-internal-tls-certs\") pod \"neutron-5b6bf89d9-5xrv6\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.880542 4981 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-ovndb-tls-certs\") pod \"neutron-5b6bf89d9-5xrv6\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.881474 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-combined-ca-bundle\") pod \"neutron-5b6bf89d9-5xrv6\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.892721 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-public-tls-certs\") pod \"neutron-5b6bf89d9-5xrv6\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.894043 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ksqs\" (UniqueName: \"kubernetes.io/projected/e691b557-a141-44b1-a2c7-4ba36af55a15-kube-api-access-4ksqs\") pod \"neutron-5b6bf89d9-5xrv6\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:17:46 crc kubenswrapper[4981]: I0227 19:17:46.949571 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:17:47 crc kubenswrapper[4981]: I0227 19:17:47.550037 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b6bf89d9-5xrv6"] Feb 27 19:17:47 crc kubenswrapper[4981]: W0227 19:17:47.567961 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode691b557_a141_44b1_a2c7_4ba36af55a15.slice/crio-12696eef6d50d712a7f82b80de1f34c1316cd108dc41496c8e440faf453f4db1 WatchSource:0}: Error finding container 12696eef6d50d712a7f82b80de1f34c1316cd108dc41496c8e440faf453f4db1: Status 404 returned error can't find the container with id 12696eef6d50d712a7f82b80de1f34c1316cd108dc41496c8e440faf453f4db1 Feb 27 19:17:47 crc kubenswrapper[4981]: I0227 19:17:47.737124 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 27 19:17:47 crc kubenswrapper[4981]: I0227 19:17:47.739286 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 27 19:17:47 crc kubenswrapper[4981]: I0227 19:17:47.785499 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 27 19:17:47 crc kubenswrapper[4981]: I0227 19:17:47.796654 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 27 19:17:48 crc kubenswrapper[4981]: I0227 19:17:48.499530 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b6bf89d9-5xrv6" event={"ID":"e691b557-a141-44b1-a2c7-4ba36af55a15","Type":"ContainerStarted","Data":"05eafe4f692fe809c80310522b3ee1e9042aee13aa572f6f81438b46b0174a5c"} Feb 27 19:17:48 crc kubenswrapper[4981]: I0227 19:17:48.500032 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 27 
19:17:48 crc kubenswrapper[4981]: I0227 19:17:48.500086 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b6bf89d9-5xrv6" event={"ID":"e691b557-a141-44b1-a2c7-4ba36af55a15","Type":"ContainerStarted","Data":"12696eef6d50d712a7f82b80de1f34c1316cd108dc41496c8e440faf453f4db1"} Feb 27 19:17:48 crc kubenswrapper[4981]: I0227 19:17:48.500101 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 27 19:17:48 crc kubenswrapper[4981]: I0227 19:17:48.994746 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:48 crc kubenswrapper[4981]: I0227 19:17:48.995191 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:49 crc kubenswrapper[4981]: I0227 19:17:49.037791 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:49 crc kubenswrapper[4981]: I0227 19:17:49.048597 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:49 crc kubenswrapper[4981]: I0227 19:17:49.509636 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b6bf89d9-5xrv6" event={"ID":"e691b557-a141-44b1-a2c7-4ba36af55a15","Type":"ContainerStarted","Data":"c280c1755db22cca5a1d60b0780818610aff15154fcb422c9167f6737e22b6d6"} Feb 27 19:17:49 crc kubenswrapper[4981]: I0227 19:17:49.510158 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:49 crc kubenswrapper[4981]: I0227 19:17:49.510494 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:49 crc kubenswrapper[4981]: I0227 19:17:49.549648 4981 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/neutron-5b6bf89d9-5xrv6" podStartSLOduration=3.549626975 podStartE2EDuration="3.549626975s" podCreationTimestamp="2026-02-27 19:17:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:17:49.546683436 +0000 UTC m=+1969.025464596" watchObservedRunningTime="2026-02-27 19:17:49.549626975 +0000 UTC m=+1969.028408135" Feb 27 19:17:50 crc kubenswrapper[4981]: I0227 19:17:50.199453 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:17:50 crc kubenswrapper[4981]: I0227 19:17:50.199824 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="76944f66-865c-4848-b348-ff9e65e9220b" containerName="ceilometer-central-agent" containerID="cri-o://d677d3ff8573097e4490affce078550fedf68a838cdae9db86de154340c89d08" gracePeriod=30 Feb 27 19:17:50 crc kubenswrapper[4981]: I0227 19:17:50.200492 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="76944f66-865c-4848-b348-ff9e65e9220b" containerName="proxy-httpd" containerID="cri-o://9b4409a15908aff9a5f35bc9691da0757d5b11871ac63d9122e88d7577b03163" gracePeriod=30 Feb 27 19:17:50 crc kubenswrapper[4981]: I0227 19:17:50.200557 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="76944f66-865c-4848-b348-ff9e65e9220b" containerName="sg-core" containerID="cri-o://8ae8785b7965de915c200e8e966c70177271f451899571f9f901ccf0e46514f1" gracePeriod=30 Feb 27 19:17:50 crc kubenswrapper[4981]: I0227 19:17:50.200612 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="76944f66-865c-4848-b348-ff9e65e9220b" containerName="ceilometer-notification-agent" containerID="cri-o://d34e3533c4941a1fe3803546e62fa619ae863cd16128b4190b8518edce6f1cb1" gracePeriod=30 
Feb 27 19:17:50 crc kubenswrapper[4981]: I0227 19:17:50.522442 4981 generic.go:334] "Generic (PLEG): container finished" podID="76944f66-865c-4848-b348-ff9e65e9220b" containerID="9b4409a15908aff9a5f35bc9691da0757d5b11871ac63d9122e88d7577b03163" exitCode=0 Feb 27 19:17:50 crc kubenswrapper[4981]: I0227 19:17:50.522487 4981 generic.go:334] "Generic (PLEG): container finished" podID="76944f66-865c-4848-b348-ff9e65e9220b" containerID="8ae8785b7965de915c200e8e966c70177271f451899571f9f901ccf0e46514f1" exitCode=2 Feb 27 19:17:50 crc kubenswrapper[4981]: I0227 19:17:50.522514 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76944f66-865c-4848-b348-ff9e65e9220b","Type":"ContainerDied","Data":"9b4409a15908aff9a5f35bc9691da0757d5b11871ac63d9122e88d7577b03163"} Feb 27 19:17:50 crc kubenswrapper[4981]: I0227 19:17:50.522567 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76944f66-865c-4848-b348-ff9e65e9220b","Type":"ContainerDied","Data":"8ae8785b7965de915c200e8e966c70177271f451899571f9f901ccf0e46514f1"} Feb 27 19:17:50 crc kubenswrapper[4981]: I0227 19:17:50.523222 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:17:51 crc kubenswrapper[4981]: I0227 19:17:51.136814 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 27 19:17:51 crc kubenswrapper[4981]: I0227 19:17:51.137000 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 27 19:17:51 crc kubenswrapper[4981]: I0227 19:17:51.537810 4981 generic.go:334] "Generic (PLEG): container finished" podID="76944f66-865c-4848-b348-ff9e65e9220b" containerID="d34e3533c4941a1fe3803546e62fa619ae863cd16128b4190b8518edce6f1cb1" exitCode=0 Feb 27 19:17:51 crc kubenswrapper[4981]: I0227 19:17:51.538237 4981 generic.go:334] "Generic (PLEG): 
container finished" podID="76944f66-865c-4848-b348-ff9e65e9220b" containerID="d677d3ff8573097e4490affce078550fedf68a838cdae9db86de154340c89d08" exitCode=0 Feb 27 19:17:51 crc kubenswrapper[4981]: I0227 19:17:51.539399 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76944f66-865c-4848-b348-ff9e65e9220b","Type":"ContainerDied","Data":"d34e3533c4941a1fe3803546e62fa619ae863cd16128b4190b8518edce6f1cb1"} Feb 27 19:17:51 crc kubenswrapper[4981]: I0227 19:17:51.539442 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76944f66-865c-4848-b348-ff9e65e9220b","Type":"ContainerDied","Data":"d677d3ff8573097e4490affce078550fedf68a838cdae9db86de154340c89d08"} Feb 27 19:17:51 crc kubenswrapper[4981]: I0227 19:17:51.645937 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:51 crc kubenswrapper[4981]: I0227 19:17:51.646380 4981 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 19:17:51 crc kubenswrapper[4981]: I0227 19:17:51.650533 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.294201 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.380366 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76944f66-865c-4848-b348-ff9e65e9220b-run-httpd\") pod \"76944f66-865c-4848-b348-ff9e65e9220b\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.380459 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76944f66-865c-4848-b348-ff9e65e9220b-combined-ca-bundle\") pod \"76944f66-865c-4848-b348-ff9e65e9220b\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.380488 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76944f66-865c-4848-b348-ff9e65e9220b-config-data\") pod \"76944f66-865c-4848-b348-ff9e65e9220b\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.380555 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76944f66-865c-4848-b348-ff9e65e9220b-sg-core-conf-yaml\") pod \"76944f66-865c-4848-b348-ff9e65e9220b\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.380598 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pld4q\" (UniqueName: \"kubernetes.io/projected/76944f66-865c-4848-b348-ff9e65e9220b-kube-api-access-pld4q\") pod \"76944f66-865c-4848-b348-ff9e65e9220b\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.380648 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/76944f66-865c-4848-b348-ff9e65e9220b-scripts\") pod \"76944f66-865c-4848-b348-ff9e65e9220b\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.380670 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76944f66-865c-4848-b348-ff9e65e9220b-log-httpd\") pod \"76944f66-865c-4848-b348-ff9e65e9220b\" (UID: \"76944f66-865c-4848-b348-ff9e65e9220b\") " Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.380784 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76944f66-865c-4848-b348-ff9e65e9220b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "76944f66-865c-4848-b348-ff9e65e9220b" (UID: "76944f66-865c-4848-b348-ff9e65e9220b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.381362 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76944f66-865c-4848-b348-ff9e65e9220b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "76944f66-865c-4848-b348-ff9e65e9220b" (UID: "76944f66-865c-4848-b348-ff9e65e9220b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.381869 4981 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76944f66-865c-4848-b348-ff9e65e9220b-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.381891 4981 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76944f66-865c-4848-b348-ff9e65e9220b-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.397327 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76944f66-865c-4848-b348-ff9e65e9220b-kube-api-access-pld4q" (OuterVolumeSpecName: "kube-api-access-pld4q") pod "76944f66-865c-4848-b348-ff9e65e9220b" (UID: "76944f66-865c-4848-b348-ff9e65e9220b"). InnerVolumeSpecName "kube-api-access-pld4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.399311 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76944f66-865c-4848-b348-ff9e65e9220b-scripts" (OuterVolumeSpecName: "scripts") pod "76944f66-865c-4848-b348-ff9e65e9220b" (UID: "76944f66-865c-4848-b348-ff9e65e9220b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.440722 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76944f66-865c-4848-b348-ff9e65e9220b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "76944f66-865c-4848-b348-ff9e65e9220b" (UID: "76944f66-865c-4848-b348-ff9e65e9220b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.483335 4981 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76944f66-865c-4848-b348-ff9e65e9220b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.483373 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pld4q\" (UniqueName: \"kubernetes.io/projected/76944f66-865c-4848-b348-ff9e65e9220b-kube-api-access-pld4q\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.483385 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76944f66-865c-4848-b348-ff9e65e9220b-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.509019 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76944f66-865c-4848-b348-ff9e65e9220b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76944f66-865c-4848-b348-ff9e65e9220b" (UID: "76944f66-865c-4848-b348-ff9e65e9220b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.528256 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76944f66-865c-4848-b348-ff9e65e9220b-config-data" (OuterVolumeSpecName: "config-data") pod "76944f66-865c-4848-b348-ff9e65e9220b" (UID: "76944f66-865c-4848-b348-ff9e65e9220b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.549729 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.549778 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76944f66-865c-4848-b348-ff9e65e9220b","Type":"ContainerDied","Data":"2bc2ca4146c1dba99d8688f2e804ad240aabd5fefe566c47fc544ab053f0fc19"} Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.549814 4981 scope.go:117] "RemoveContainer" containerID="9b4409a15908aff9a5f35bc9691da0757d5b11871ac63d9122e88d7577b03163" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.575963 4981 scope.go:117] "RemoveContainer" containerID="8ae8785b7965de915c200e8e966c70177271f451899571f9f901ccf0e46514f1" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.588525 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76944f66-865c-4848-b348-ff9e65e9220b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.588582 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76944f66-865c-4848-b348-ff9e65e9220b-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.589296 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.596670 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.609036 4981 scope.go:117] "RemoveContainer" containerID="d34e3533c4941a1fe3803546e62fa619ae863cd16128b4190b8518edce6f1cb1" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.610401 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:17:52 crc kubenswrapper[4981]: E0227 19:17:52.610889 4981 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="76944f66-865c-4848-b348-ff9e65e9220b" containerName="ceilometer-notification-agent" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.610967 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="76944f66-865c-4848-b348-ff9e65e9220b" containerName="ceilometer-notification-agent" Feb 27 19:17:52 crc kubenswrapper[4981]: E0227 19:17:52.611929 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76944f66-865c-4848-b348-ff9e65e9220b" containerName="sg-core" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.612026 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="76944f66-865c-4848-b348-ff9e65e9220b" containerName="sg-core" Feb 27 19:17:52 crc kubenswrapper[4981]: E0227 19:17:52.612237 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76944f66-865c-4848-b348-ff9e65e9220b" containerName="proxy-httpd" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.612305 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="76944f66-865c-4848-b348-ff9e65e9220b" containerName="proxy-httpd" Feb 27 19:17:52 crc kubenswrapper[4981]: E0227 19:17:52.612392 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76944f66-865c-4848-b348-ff9e65e9220b" containerName="ceilometer-central-agent" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.612444 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="76944f66-865c-4848-b348-ff9e65e9220b" containerName="ceilometer-central-agent" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.612725 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="76944f66-865c-4848-b348-ff9e65e9220b" containerName="sg-core" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.612790 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="76944f66-865c-4848-b348-ff9e65e9220b" containerName="proxy-httpd" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.612858 4981 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="76944f66-865c-4848-b348-ff9e65e9220b" containerName="ceilometer-notification-agent" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.612913 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="76944f66-865c-4848-b348-ff9e65e9220b" containerName="ceilometer-central-agent" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.614584 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.617758 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.617997 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.625176 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.650011 4981 scope.go:117] "RemoveContainer" containerID="d677d3ff8573097e4490affce078550fedf68a838cdae9db86de154340c89d08" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.690590 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e024b24e-321b-40c7-9574-40685c9a7ac9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") " pod="openstack/ceilometer-0" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.690672 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e024b24e-321b-40c7-9574-40685c9a7ac9-log-httpd\") pod \"ceilometer-0\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") " pod="openstack/ceilometer-0" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.690704 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e024b24e-321b-40c7-9574-40685c9a7ac9-run-httpd\") pod \"ceilometer-0\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") " pod="openstack/ceilometer-0" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.690822 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e024b24e-321b-40c7-9574-40685c9a7ac9-config-data\") pod \"ceilometer-0\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") " pod="openstack/ceilometer-0" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.691006 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e024b24e-321b-40c7-9574-40685c9a7ac9-scripts\") pod \"ceilometer-0\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") " pod="openstack/ceilometer-0" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.692800 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e024b24e-321b-40c7-9574-40685c9a7ac9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") " pod="openstack/ceilometer-0" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.692867 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbk72\" (UniqueName: \"kubernetes.io/projected/e024b24e-321b-40c7-9574-40685c9a7ac9-kube-api-access-lbk72\") pod \"ceilometer-0\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") " pod="openstack/ceilometer-0" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.794696 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e024b24e-321b-40c7-9574-40685c9a7ac9-config-data\") pod \"ceilometer-0\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") " pod="openstack/ceilometer-0" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.794849 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e024b24e-321b-40c7-9574-40685c9a7ac9-scripts\") pod \"ceilometer-0\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") " pod="openstack/ceilometer-0" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.794892 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e024b24e-321b-40c7-9574-40685c9a7ac9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") " pod="openstack/ceilometer-0" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.794928 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbk72\" (UniqueName: \"kubernetes.io/projected/e024b24e-321b-40c7-9574-40685c9a7ac9-kube-api-access-lbk72\") pod \"ceilometer-0\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") " pod="openstack/ceilometer-0" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.794968 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e024b24e-321b-40c7-9574-40685c9a7ac9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") " pod="openstack/ceilometer-0" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.795004 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e024b24e-321b-40c7-9574-40685c9a7ac9-log-httpd\") pod \"ceilometer-0\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") " pod="openstack/ceilometer-0" Feb 27 19:17:52 crc kubenswrapper[4981]: 
I0227 19:17:52.795033 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e024b24e-321b-40c7-9574-40685c9a7ac9-run-httpd\") pod \"ceilometer-0\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") " pod="openstack/ceilometer-0" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.795606 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e024b24e-321b-40c7-9574-40685c9a7ac9-run-httpd\") pod \"ceilometer-0\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") " pod="openstack/ceilometer-0" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.796016 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e024b24e-321b-40c7-9574-40685c9a7ac9-log-httpd\") pod \"ceilometer-0\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") " pod="openstack/ceilometer-0" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.800362 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e024b24e-321b-40c7-9574-40685c9a7ac9-config-data\") pod \"ceilometer-0\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") " pod="openstack/ceilometer-0" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.801093 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e024b24e-321b-40c7-9574-40685c9a7ac9-scripts\") pod \"ceilometer-0\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") " pod="openstack/ceilometer-0" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.801108 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e024b24e-321b-40c7-9574-40685c9a7ac9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") " 
pod="openstack/ceilometer-0" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.812933 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e024b24e-321b-40c7-9574-40685c9a7ac9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") " pod="openstack/ceilometer-0" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.816711 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbk72\" (UniqueName: \"kubernetes.io/projected/e024b24e-321b-40c7-9574-40685c9a7ac9-kube-api-access-lbk72\") pod \"ceilometer-0\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") " pod="openstack/ceilometer-0" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.936650 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:17:52 crc kubenswrapper[4981]: I0227 19:17:52.937196 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" Feb 27 19:17:53 crc kubenswrapper[4981]: I0227 19:17:53.006981 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-x9cmq"] Feb 27 19:17:53 crc kubenswrapper[4981]: I0227 19:17:53.007330 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq" podUID="41cb346d-f755-45dd-bb9d-3a972eada0b0" containerName="dnsmasq-dns" containerID="cri-o://620caf60abe754384b34a3830b884ec113aaa713430f21d66c8fceead31fb1f9" gracePeriod=10 Feb 27 19:17:53 crc kubenswrapper[4981]: I0227 19:17:53.490119 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:17:53 crc kubenswrapper[4981]: W0227 19:17:53.497285 4981 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode024b24e_321b_40c7_9574_40685c9a7ac9.slice/crio-b62d72127900622fe504f50ee5f352474c6a5c3a9dec412ef76d7a9968f9af4d WatchSource:0}: Error finding container b62d72127900622fe504f50ee5f352474c6a5c3a9dec412ef76d7a9968f9af4d: Status 404 returned error can't find the container with id b62d72127900622fe504f50ee5f352474c6a5c3a9dec412ef76d7a9968f9af4d Feb 27 19:17:53 crc kubenswrapper[4981]: I0227 19:17:53.561170 4981 generic.go:334] "Generic (PLEG): container finished" podID="41cb346d-f755-45dd-bb9d-3a972eada0b0" containerID="620caf60abe754384b34a3830b884ec113aaa713430f21d66c8fceead31fb1f9" exitCode=0 Feb 27 19:17:53 crc kubenswrapper[4981]: I0227 19:17:53.561247 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq" event={"ID":"41cb346d-f755-45dd-bb9d-3a972eada0b0","Type":"ContainerDied","Data":"620caf60abe754384b34a3830b884ec113aaa713430f21d66c8fceead31fb1f9"} Feb 27 19:17:53 crc kubenswrapper[4981]: I0227 19:17:53.563496 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e024b24e-321b-40c7-9574-40685c9a7ac9","Type":"ContainerStarted","Data":"b62d72127900622fe504f50ee5f352474c6a5c3a9dec412ef76d7a9968f9af4d"} Feb 27 19:17:53 crc kubenswrapper[4981]: I0227 19:17:53.638370 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76944f66-865c-4848-b348-ff9e65e9220b" path="/var/lib/kubelet/pods/76944f66-865c-4848-b348-ff9e65e9220b/volumes" Feb 27 19:17:53 crc kubenswrapper[4981]: I0227 19:17:53.695299 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:17:54 crc kubenswrapper[4981]: I0227 19:17:54.220260 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq" Feb 27 19:17:54 crc kubenswrapper[4981]: I0227 19:17:54.328831 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-ovsdbserver-sb\") pod \"41cb346d-f755-45dd-bb9d-3a972eada0b0\" (UID: \"41cb346d-f755-45dd-bb9d-3a972eada0b0\") " Feb 27 19:17:54 crc kubenswrapper[4981]: I0227 19:17:54.328886 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-dns-swift-storage-0\") pod \"41cb346d-f755-45dd-bb9d-3a972eada0b0\" (UID: \"41cb346d-f755-45dd-bb9d-3a972eada0b0\") " Feb 27 19:17:54 crc kubenswrapper[4981]: I0227 19:17:54.328974 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px2m2\" (UniqueName: \"kubernetes.io/projected/41cb346d-f755-45dd-bb9d-3a972eada0b0-kube-api-access-px2m2\") pod \"41cb346d-f755-45dd-bb9d-3a972eada0b0\" (UID: \"41cb346d-f755-45dd-bb9d-3a972eada0b0\") " Feb 27 19:17:54 crc kubenswrapper[4981]: I0227 19:17:54.328997 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-ovsdbserver-nb\") pod \"41cb346d-f755-45dd-bb9d-3a972eada0b0\" (UID: \"41cb346d-f755-45dd-bb9d-3a972eada0b0\") " Feb 27 19:17:54 crc kubenswrapper[4981]: I0227 19:17:54.330403 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-config\") pod \"41cb346d-f755-45dd-bb9d-3a972eada0b0\" (UID: \"41cb346d-f755-45dd-bb9d-3a972eada0b0\") " Feb 27 19:17:54 crc kubenswrapper[4981]: I0227 19:17:54.330515 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-dns-svc\") pod \"41cb346d-f755-45dd-bb9d-3a972eada0b0\" (UID: \"41cb346d-f755-45dd-bb9d-3a972eada0b0\") " Feb 27 19:17:54 crc kubenswrapper[4981]: I0227 19:17:54.357983 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41cb346d-f755-45dd-bb9d-3a972eada0b0-kube-api-access-px2m2" (OuterVolumeSpecName: "kube-api-access-px2m2") pod "41cb346d-f755-45dd-bb9d-3a972eada0b0" (UID: "41cb346d-f755-45dd-bb9d-3a972eada0b0"). InnerVolumeSpecName "kube-api-access-px2m2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:17:54 crc kubenswrapper[4981]: I0227 19:17:54.377092 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "41cb346d-f755-45dd-bb9d-3a972eada0b0" (UID: "41cb346d-f755-45dd-bb9d-3a972eada0b0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:17:54 crc kubenswrapper[4981]: I0227 19:17:54.385857 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "41cb346d-f755-45dd-bb9d-3a972eada0b0" (UID: "41cb346d-f755-45dd-bb9d-3a972eada0b0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:17:54 crc kubenswrapper[4981]: I0227 19:17:54.397524 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "41cb346d-f755-45dd-bb9d-3a972eada0b0" (UID: "41cb346d-f755-45dd-bb9d-3a972eada0b0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:17:54 crc kubenswrapper[4981]: I0227 19:17:54.399198 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "41cb346d-f755-45dd-bb9d-3a972eada0b0" (UID: "41cb346d-f755-45dd-bb9d-3a972eada0b0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:17:54 crc kubenswrapper[4981]: I0227 19:17:54.408628 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-config" (OuterVolumeSpecName: "config") pod "41cb346d-f755-45dd-bb9d-3a972eada0b0" (UID: "41cb346d-f755-45dd-bb9d-3a972eada0b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:17:54 crc kubenswrapper[4981]: I0227 19:17:54.434016 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:54 crc kubenswrapper[4981]: I0227 19:17:54.434246 4981 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:54 crc kubenswrapper[4981]: I0227 19:17:54.434345 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px2m2\" (UniqueName: \"kubernetes.io/projected/41cb346d-f755-45dd-bb9d-3a972eada0b0-kube-api-access-px2m2\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:54 crc kubenswrapper[4981]: I0227 19:17:54.434403 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-ovsdbserver-nb\") on node \"crc\" 
DevicePath \"\"" Feb 27 19:17:54 crc kubenswrapper[4981]: I0227 19:17:54.434465 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:54 crc kubenswrapper[4981]: I0227 19:17:54.434520 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41cb346d-f755-45dd-bb9d-3a972eada0b0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 19:17:54 crc kubenswrapper[4981]: I0227 19:17:54.576186 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq" event={"ID":"41cb346d-f755-45dd-bb9d-3a972eada0b0","Type":"ContainerDied","Data":"fab8e76902461b61391d56a0246e0d10dbc41c912795c49d3d0de5d9cae1bdc3"} Feb 27 19:17:54 crc kubenswrapper[4981]: I0227 19:17:54.576274 4981 scope.go:117] "RemoveContainer" containerID="620caf60abe754384b34a3830b884ec113aaa713430f21d66c8fceead31fb1f9" Feb 27 19:17:54 crc kubenswrapper[4981]: I0227 19:17:54.577000 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-x9cmq" Feb 27 19:17:54 crc kubenswrapper[4981]: I0227 19:17:54.603297 4981 scope.go:117] "RemoveContainer" containerID="855f58196bfd9a19e38237c0925f91032b05a5e57aa28071294b7bf59ecdbfdb" Feb 27 19:17:54 crc kubenswrapper[4981]: I0227 19:17:54.613724 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-x9cmq"] Feb 27 19:17:54 crc kubenswrapper[4981]: I0227 19:17:54.620452 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-x9cmq"] Feb 27 19:17:55 crc kubenswrapper[4981]: I0227 19:17:55.642151 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41cb346d-f755-45dd-bb9d-3a972eada0b0" path="/var/lib/kubelet/pods/41cb346d-f755-45dd-bb9d-3a972eada0b0/volumes" Feb 27 19:17:56 crc kubenswrapper[4981]: I0227 19:17:56.597529 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e024b24e-321b-40c7-9574-40685c9a7ac9","Type":"ContainerStarted","Data":"fe4d141d3c5312a65aaf2b26c50af0f91db5bf0c12532e893b1f41e3d4438ff0"} Feb 27 19:17:57 crc kubenswrapper[4981]: I0227 19:17:57.609344 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e024b24e-321b-40c7-9574-40685c9a7ac9","Type":"ContainerStarted","Data":"59b9366e6b5f24334ef4e39a69d9cc4f63a6f03301b5edb8a93e375000e378f9"} Feb 27 19:17:58 crc kubenswrapper[4981]: I0227 19:17:58.621663 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e024b24e-321b-40c7-9574-40685c9a7ac9","Type":"ContainerStarted","Data":"7e2a04cb6d63eef52af5c27025fcf24005bc7fd2e96846421f51443dbf2b295e"} Feb 27 19:18:00 crc kubenswrapper[4981]: I0227 19:18:00.148893 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536998-4jhhd"] Feb 27 19:18:00 crc kubenswrapper[4981]: E0227 19:18:00.149607 4981 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="41cb346d-f755-45dd-bb9d-3a972eada0b0" containerName="init" Feb 27 19:18:00 crc kubenswrapper[4981]: I0227 19:18:00.149620 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="41cb346d-f755-45dd-bb9d-3a972eada0b0" containerName="init" Feb 27 19:18:00 crc kubenswrapper[4981]: E0227 19:18:00.149646 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41cb346d-f755-45dd-bb9d-3a972eada0b0" containerName="dnsmasq-dns" Feb 27 19:18:00 crc kubenswrapper[4981]: I0227 19:18:00.149653 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="41cb346d-f755-45dd-bb9d-3a972eada0b0" containerName="dnsmasq-dns" Feb 27 19:18:00 crc kubenswrapper[4981]: I0227 19:18:00.149850 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="41cb346d-f755-45dd-bb9d-3a972eada0b0" containerName="dnsmasq-dns" Feb 27 19:18:00 crc kubenswrapper[4981]: I0227 19:18:00.150495 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536998-4jhhd" Feb 27 19:18:00 crc kubenswrapper[4981]: I0227 19:18:00.156566 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:18:00 crc kubenswrapper[4981]: I0227 19:18:00.156699 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf" Feb 27 19:18:00 crc kubenswrapper[4981]: I0227 19:18:00.156711 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:18:00 crc kubenswrapper[4981]: I0227 19:18:00.168430 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536998-4jhhd"] Feb 27 19:18:00 crc kubenswrapper[4981]: I0227 19:18:00.300214 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl852\" (UniqueName: 
\"kubernetes.io/projected/155a2ade-2146-4eb4-8f2a-956e1dbbc1c9-kube-api-access-nl852\") pod \"auto-csr-approver-29536998-4jhhd\" (UID: \"155a2ade-2146-4eb4-8f2a-956e1dbbc1c9\") " pod="openshift-infra/auto-csr-approver-29536998-4jhhd"
Feb 27 19:18:00 crc kubenswrapper[4981]: I0227 19:18:00.403282 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl852\" (UniqueName: \"kubernetes.io/projected/155a2ade-2146-4eb4-8f2a-956e1dbbc1c9-kube-api-access-nl852\") pod \"auto-csr-approver-29536998-4jhhd\" (UID: \"155a2ade-2146-4eb4-8f2a-956e1dbbc1c9\") " pod="openshift-infra/auto-csr-approver-29536998-4jhhd"
Feb 27 19:18:00 crc kubenswrapper[4981]: I0227 19:18:00.425621 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl852\" (UniqueName: \"kubernetes.io/projected/155a2ade-2146-4eb4-8f2a-956e1dbbc1c9-kube-api-access-nl852\") pod \"auto-csr-approver-29536998-4jhhd\" (UID: \"155a2ade-2146-4eb4-8f2a-956e1dbbc1c9\") " pod="openshift-infra/auto-csr-approver-29536998-4jhhd"
Feb 27 19:18:00 crc kubenswrapper[4981]: I0227 19:18:00.472791 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536998-4jhhd"
Feb 27 19:18:00 crc kubenswrapper[4981]: I0227 19:18:00.963449 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536998-4jhhd"]
Feb 27 19:18:01 crc kubenswrapper[4981]: I0227 19:18:01.664496 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536998-4jhhd" event={"ID":"155a2ade-2146-4eb4-8f2a-956e1dbbc1c9","Type":"ContainerStarted","Data":"cd94943feb5c03463e53a3908fdfeff328f58be7181ec0cf58d5d10cecabb95a"}
Feb 27 19:18:02 crc kubenswrapper[4981]: I0227 19:18:02.676687 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536998-4jhhd" event={"ID":"155a2ade-2146-4eb4-8f2a-956e1dbbc1c9","Type":"ContainerStarted","Data":"fb14701431afb434dd121e9dd7a42dbd8daf25a43721f12ab7a46f62783db48a"}
Feb 27 19:18:02 crc kubenswrapper[4981]: I0227 19:18:02.679126 4981 generic.go:334] "Generic (PLEG): container finished" podID="3b4a8933-c57c-4c72-ba77-e6b637a282ee" containerID="ad751a2a3b8f1441777f905245851d231f1a02d49eb9a9ac2a1fa328f8c6d264" exitCode=0
Feb 27 19:18:02 crc kubenswrapper[4981]: I0227 19:18:02.679189 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-clm2b" event={"ID":"3b4a8933-c57c-4c72-ba77-e6b637a282ee","Type":"ContainerDied","Data":"ad751a2a3b8f1441777f905245851d231f1a02d49eb9a9ac2a1fa328f8c6d264"}
Feb 27 19:18:02 crc kubenswrapper[4981]: I0227 19:18:02.682349 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e024b24e-321b-40c7-9574-40685c9a7ac9","Type":"ContainerStarted","Data":"01049aa2957dcaffbd9d432affba049ea58cfaa69ef3666b4d3e0477c14bde34"}
Feb 27 19:18:02 crc kubenswrapper[4981]: I0227 19:18:02.682503 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e024b24e-321b-40c7-9574-40685c9a7ac9" containerName="ceilometer-central-agent" containerID="cri-o://fe4d141d3c5312a65aaf2b26c50af0f91db5bf0c12532e893b1f41e3d4438ff0" gracePeriod=30
Feb 27 19:18:02 crc kubenswrapper[4981]: I0227 19:18:02.682551 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 27 19:18:02 crc kubenswrapper[4981]: I0227 19:18:02.682605 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e024b24e-321b-40c7-9574-40685c9a7ac9" containerName="sg-core" containerID="cri-o://7e2a04cb6d63eef52af5c27025fcf24005bc7fd2e96846421f51443dbf2b295e" gracePeriod=30
Feb 27 19:18:02 crc kubenswrapper[4981]: I0227 19:18:02.682606 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e024b24e-321b-40c7-9574-40685c9a7ac9" containerName="ceilometer-notification-agent" containerID="cri-o://59b9366e6b5f24334ef4e39a69d9cc4f63a6f03301b5edb8a93e375000e378f9" gracePeriod=30
Feb 27 19:18:02 crc kubenswrapper[4981]: I0227 19:18:02.682700 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e024b24e-321b-40c7-9574-40685c9a7ac9" containerName="proxy-httpd" containerID="cri-o://01049aa2957dcaffbd9d432affba049ea58cfaa69ef3666b4d3e0477c14bde34" gracePeriod=30
Feb 27 19:18:02 crc kubenswrapper[4981]: I0227 19:18:02.706419 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536998-4jhhd" podStartSLOduration=1.435581984 podStartE2EDuration="2.706397653s" podCreationTimestamp="2026-02-27 19:18:00 +0000 UTC" firstStartedPulling="2026-02-27 19:18:00.963490416 +0000 UTC m=+1980.442271576" lastFinishedPulling="2026-02-27 19:18:02.234306085 +0000 UTC m=+1981.713087245" observedRunningTime="2026-02-27 19:18:02.693654462 +0000 UTC m=+1982.172435622" watchObservedRunningTime="2026-02-27 19:18:02.706397653 +0000 UTC m=+1982.185178813"
Feb 27 19:18:02 crc kubenswrapper[4981]: I0227 19:18:02.737166 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.805840117 podStartE2EDuration="10.737135834s" podCreationTimestamp="2026-02-27 19:17:52 +0000 UTC" firstStartedPulling="2026-02-27 19:17:53.499563033 +0000 UTC m=+1972.978344183" lastFinishedPulling="2026-02-27 19:18:01.43085874 +0000 UTC m=+1980.909639900" observedRunningTime="2026-02-27 19:18:02.724660972 +0000 UTC m=+1982.203442132" watchObservedRunningTime="2026-02-27 19:18:02.737135834 +0000 UTC m=+1982.215916994"
Feb 27 19:18:03 crc kubenswrapper[4981]: I0227 19:18:03.694791 4981 generic.go:334] "Generic (PLEG): container finished" podID="155a2ade-2146-4eb4-8f2a-956e1dbbc1c9" containerID="fb14701431afb434dd121e9dd7a42dbd8daf25a43721f12ab7a46f62783db48a" exitCode=0
Feb 27 19:18:03 crc kubenswrapper[4981]: I0227 19:18:03.695287 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536998-4jhhd" event={"ID":"155a2ade-2146-4eb4-8f2a-956e1dbbc1c9","Type":"ContainerDied","Data":"fb14701431afb434dd121e9dd7a42dbd8daf25a43721f12ab7a46f62783db48a"}
Feb 27 19:18:03 crc kubenswrapper[4981]: I0227 19:18:03.699356 4981 generic.go:334] "Generic (PLEG): container finished" podID="e024b24e-321b-40c7-9574-40685c9a7ac9" containerID="01049aa2957dcaffbd9d432affba049ea58cfaa69ef3666b4d3e0477c14bde34" exitCode=0
Feb 27 19:18:03 crc kubenswrapper[4981]: I0227 19:18:03.699381 4981 generic.go:334] "Generic (PLEG): container finished" podID="e024b24e-321b-40c7-9574-40685c9a7ac9" containerID="7e2a04cb6d63eef52af5c27025fcf24005bc7fd2e96846421f51443dbf2b295e" exitCode=2
Feb 27 19:18:03 crc kubenswrapper[4981]: I0227 19:18:03.699392 4981 generic.go:334] "Generic (PLEG): container finished" podID="e024b24e-321b-40c7-9574-40685c9a7ac9" containerID="59b9366e6b5f24334ef4e39a69d9cc4f63a6f03301b5edb8a93e375000e378f9" exitCode=0
Feb 27 19:18:03 crc kubenswrapper[4981]: I0227 19:18:03.699528 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e024b24e-321b-40c7-9574-40685c9a7ac9","Type":"ContainerDied","Data":"01049aa2957dcaffbd9d432affba049ea58cfaa69ef3666b4d3e0477c14bde34"}
Feb 27 19:18:03 crc kubenswrapper[4981]: I0227 19:18:03.699627 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e024b24e-321b-40c7-9574-40685c9a7ac9","Type":"ContainerDied","Data":"7e2a04cb6d63eef52af5c27025fcf24005bc7fd2e96846421f51443dbf2b295e"}
Feb 27 19:18:03 crc kubenswrapper[4981]: I0227 19:18:03.699652 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e024b24e-321b-40c7-9574-40685c9a7ac9","Type":"ContainerDied","Data":"59b9366e6b5f24334ef4e39a69d9cc4f63a6f03301b5edb8a93e375000e378f9"}
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.110965 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-clm2b"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.193273 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.282281 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e024b24e-321b-40c7-9574-40685c9a7ac9-config-data\") pod \"e024b24e-321b-40c7-9574-40685c9a7ac9\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") "
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.282397 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbk72\" (UniqueName: \"kubernetes.io/projected/e024b24e-321b-40c7-9574-40685c9a7ac9-kube-api-access-lbk72\") pod \"e024b24e-321b-40c7-9574-40685c9a7ac9\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") "
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.282427 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e024b24e-321b-40c7-9574-40685c9a7ac9-sg-core-conf-yaml\") pod \"e024b24e-321b-40c7-9574-40685c9a7ac9\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") "
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.282532 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e024b24e-321b-40c7-9574-40685c9a7ac9-run-httpd\") pod \"e024b24e-321b-40c7-9574-40685c9a7ac9\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") "
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.282608 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b4a8933-c57c-4c72-ba77-e6b637a282ee-scripts\") pod \"3b4a8933-c57c-4c72-ba77-e6b637a282ee\" (UID: \"3b4a8933-c57c-4c72-ba77-e6b637a282ee\") "
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.282666 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e024b24e-321b-40c7-9574-40685c9a7ac9-log-httpd\") pod \"e024b24e-321b-40c7-9574-40685c9a7ac9\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") "
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.282717 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b4a8933-c57c-4c72-ba77-e6b637a282ee-config-data\") pod \"3b4a8933-c57c-4c72-ba77-e6b637a282ee\" (UID: \"3b4a8933-c57c-4c72-ba77-e6b637a282ee\") "
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.282742 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e024b24e-321b-40c7-9574-40685c9a7ac9-scripts\") pod \"e024b24e-321b-40c7-9574-40685c9a7ac9\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") "
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.282782 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b4a8933-c57c-4c72-ba77-e6b637a282ee-combined-ca-bundle\") pod \"3b4a8933-c57c-4c72-ba77-e6b637a282ee\" (UID: \"3b4a8933-c57c-4c72-ba77-e6b637a282ee\") "
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.282805 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5qcg\" (UniqueName: \"kubernetes.io/projected/3b4a8933-c57c-4c72-ba77-e6b637a282ee-kube-api-access-g5qcg\") pod \"3b4a8933-c57c-4c72-ba77-e6b637a282ee\" (UID: \"3b4a8933-c57c-4c72-ba77-e6b637a282ee\") "
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.282822 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e024b24e-321b-40c7-9574-40685c9a7ac9-combined-ca-bundle\") pod \"e024b24e-321b-40c7-9574-40685c9a7ac9\" (UID: \"e024b24e-321b-40c7-9574-40685c9a7ac9\") "
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.283957 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e024b24e-321b-40c7-9574-40685c9a7ac9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e024b24e-321b-40c7-9574-40685c9a7ac9" (UID: "e024b24e-321b-40c7-9574-40685c9a7ac9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.285669 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e024b24e-321b-40c7-9574-40685c9a7ac9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e024b24e-321b-40c7-9574-40685c9a7ac9" (UID: "e024b24e-321b-40c7-9574-40685c9a7ac9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.291377 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e024b24e-321b-40c7-9574-40685c9a7ac9-scripts" (OuterVolumeSpecName: "scripts") pod "e024b24e-321b-40c7-9574-40685c9a7ac9" (UID: "e024b24e-321b-40c7-9574-40685c9a7ac9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.291600 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b4a8933-c57c-4c72-ba77-e6b637a282ee-scripts" (OuterVolumeSpecName: "scripts") pod "3b4a8933-c57c-4c72-ba77-e6b637a282ee" (UID: "3b4a8933-c57c-4c72-ba77-e6b637a282ee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.292236 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b4a8933-c57c-4c72-ba77-e6b637a282ee-kube-api-access-g5qcg" (OuterVolumeSpecName: "kube-api-access-g5qcg") pod "3b4a8933-c57c-4c72-ba77-e6b637a282ee" (UID: "3b4a8933-c57c-4c72-ba77-e6b637a282ee"). InnerVolumeSpecName "kube-api-access-g5qcg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.293328 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e024b24e-321b-40c7-9574-40685c9a7ac9-kube-api-access-lbk72" (OuterVolumeSpecName: "kube-api-access-lbk72") pod "e024b24e-321b-40c7-9574-40685c9a7ac9" (UID: "e024b24e-321b-40c7-9574-40685c9a7ac9"). InnerVolumeSpecName "kube-api-access-lbk72". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.314780 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b4a8933-c57c-4c72-ba77-e6b637a282ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b4a8933-c57c-4c72-ba77-e6b637a282ee" (UID: "3b4a8933-c57c-4c72-ba77-e6b637a282ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.318904 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e024b24e-321b-40c7-9574-40685c9a7ac9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e024b24e-321b-40c7-9574-40685c9a7ac9" (UID: "e024b24e-321b-40c7-9574-40685c9a7ac9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.325564 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b4a8933-c57c-4c72-ba77-e6b637a282ee-config-data" (OuterVolumeSpecName: "config-data") pod "3b4a8933-c57c-4c72-ba77-e6b637a282ee" (UID: "3b4a8933-c57c-4c72-ba77-e6b637a282ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.385511 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbk72\" (UniqueName: \"kubernetes.io/projected/e024b24e-321b-40c7-9574-40685c9a7ac9-kube-api-access-lbk72\") on node \"crc\" DevicePath \"\""
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.385560 4981 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e024b24e-321b-40c7-9574-40685c9a7ac9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.385581 4981 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e024b24e-321b-40c7-9574-40685c9a7ac9-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.385598 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b4a8933-c57c-4c72-ba77-e6b637a282ee-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.385615 4981 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e024b24e-321b-40c7-9574-40685c9a7ac9-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.385630 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b4a8933-c57c-4c72-ba77-e6b637a282ee-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.385646 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e024b24e-321b-40c7-9574-40685c9a7ac9-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.385660 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b4a8933-c57c-4c72-ba77-e6b637a282ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.385677 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5qcg\" (UniqueName: \"kubernetes.io/projected/3b4a8933-c57c-4c72-ba77-e6b637a282ee-kube-api-access-g5qcg\") on node \"crc\" DevicePath \"\""
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.397904 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e024b24e-321b-40c7-9574-40685c9a7ac9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e024b24e-321b-40c7-9574-40685c9a7ac9" (UID: "e024b24e-321b-40c7-9574-40685c9a7ac9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.434505 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e024b24e-321b-40c7-9574-40685c9a7ac9-config-data" (OuterVolumeSpecName: "config-data") pod "e024b24e-321b-40c7-9574-40685c9a7ac9" (UID: "e024b24e-321b-40c7-9574-40685c9a7ac9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.488790 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e024b24e-321b-40c7-9574-40685c9a7ac9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.488857 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e024b24e-321b-40c7-9574-40685c9a7ac9-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.714416 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-clm2b"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.714408 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-clm2b" event={"ID":"3b4a8933-c57c-4c72-ba77-e6b637a282ee","Type":"ContainerDied","Data":"3374d8ae06c19d6183b6df6f336928ded9c7055aa69d4764c21afb66d028365f"}
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.714607 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3374d8ae06c19d6183b6df6f336928ded9c7055aa69d4764c21afb66d028365f"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.719282 4981 generic.go:334] "Generic (PLEG): container finished" podID="e024b24e-321b-40c7-9574-40685c9a7ac9" containerID="fe4d141d3c5312a65aaf2b26c50af0f91db5bf0c12532e893b1f41e3d4438ff0" exitCode=0
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.719340 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.719381 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e024b24e-321b-40c7-9574-40685c9a7ac9","Type":"ContainerDied","Data":"fe4d141d3c5312a65aaf2b26c50af0f91db5bf0c12532e893b1f41e3d4438ff0"}
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.719479 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e024b24e-321b-40c7-9574-40685c9a7ac9","Type":"ContainerDied","Data":"b62d72127900622fe504f50ee5f352474c6a5c3a9dec412ef76d7a9968f9af4d"}
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.719514 4981 scope.go:117] "RemoveContainer" containerID="01049aa2957dcaffbd9d432affba049ea58cfaa69ef3666b4d3e0477c14bde34"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.749723 4981 scope.go:117] "RemoveContainer" containerID="7e2a04cb6d63eef52af5c27025fcf24005bc7fd2e96846421f51443dbf2b295e"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.782478 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.794329 4981 scope.go:117] "RemoveContainer" containerID="59b9366e6b5f24334ef4e39a69d9cc4f63a6f03301b5edb8a93e375000e378f9"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.803273 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.825087 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 27 19:18:04 crc kubenswrapper[4981]: E0227 19:18:04.825727 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e024b24e-321b-40c7-9574-40685c9a7ac9" containerName="sg-core"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.825742 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e024b24e-321b-40c7-9574-40685c9a7ac9" containerName="sg-core"
Feb 27 19:18:04 crc kubenswrapper[4981]: E0227 19:18:04.825756 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e024b24e-321b-40c7-9574-40685c9a7ac9" containerName="proxy-httpd"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.825767 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e024b24e-321b-40c7-9574-40685c9a7ac9" containerName="proxy-httpd"
Feb 27 19:18:04 crc kubenswrapper[4981]: E0227 19:18:04.825784 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e024b24e-321b-40c7-9574-40685c9a7ac9" containerName="ceilometer-central-agent"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.825792 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e024b24e-321b-40c7-9574-40685c9a7ac9" containerName="ceilometer-central-agent"
Feb 27 19:18:04 crc kubenswrapper[4981]: E0227 19:18:04.825805 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b4a8933-c57c-4c72-ba77-e6b637a282ee" containerName="nova-cell0-conductor-db-sync"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.825811 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b4a8933-c57c-4c72-ba77-e6b637a282ee" containerName="nova-cell0-conductor-db-sync"
Feb 27 19:18:04 crc kubenswrapper[4981]: E0227 19:18:04.825848 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e024b24e-321b-40c7-9574-40685c9a7ac9" containerName="ceilometer-notification-agent"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.825855 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e024b24e-321b-40c7-9574-40685c9a7ac9" containerName="ceilometer-notification-agent"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.826130 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="e024b24e-321b-40c7-9574-40685c9a7ac9" containerName="ceilometer-central-agent"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.826147 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b4a8933-c57c-4c72-ba77-e6b637a282ee" containerName="nova-cell0-conductor-db-sync"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.826160 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="e024b24e-321b-40c7-9574-40685c9a7ac9" containerName="sg-core"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.826176 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="e024b24e-321b-40c7-9574-40685c9a7ac9" containerName="proxy-httpd"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.826188 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="e024b24e-321b-40c7-9574-40685c9a7ac9" containerName="ceilometer-notification-agent"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.828261 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.836243 4981 scope.go:117] "RemoveContainer" containerID="fe4d141d3c5312a65aaf2b26c50af0f91db5bf0c12532e893b1f41e3d4438ff0"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.847231 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.848443 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.858828 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.876138 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.877774 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.881577 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.881927 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8rxc8"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.902896 4981 scope.go:117] "RemoveContainer" containerID="01049aa2957dcaffbd9d432affba049ea58cfaa69ef3666b4d3e0477c14bde34"
Feb 27 19:18:04 crc kubenswrapper[4981]: E0227 19:18:04.905287 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01049aa2957dcaffbd9d432affba049ea58cfaa69ef3666b4d3e0477c14bde34\": container with ID starting with 01049aa2957dcaffbd9d432affba049ea58cfaa69ef3666b4d3e0477c14bde34 not found: ID does not exist" containerID="01049aa2957dcaffbd9d432affba049ea58cfaa69ef3666b4d3e0477c14bde34"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.905321 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01049aa2957dcaffbd9d432affba049ea58cfaa69ef3666b4d3e0477c14bde34"} err="failed to get container status \"01049aa2957dcaffbd9d432affba049ea58cfaa69ef3666b4d3e0477c14bde34\": rpc error: code = NotFound desc = could not find container \"01049aa2957dcaffbd9d432affba049ea58cfaa69ef3666b4d3e0477c14bde34\": container with ID starting with 01049aa2957dcaffbd9d432affba049ea58cfaa69ef3666b4d3e0477c14bde34 not found: ID does not exist"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.905344 4981 scope.go:117] "RemoveContainer" containerID="7e2a04cb6d63eef52af5c27025fcf24005bc7fd2e96846421f51443dbf2b295e"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.906291 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc1d2b74-ba27-49fe-8bb8-c491da856c84-log-httpd\") pod \"ceilometer-0\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " pod="openstack/ceilometer-0"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.906620 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1d2b74-ba27-49fe-8bb8-c491da856c84-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " pod="openstack/ceilometer-0"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.906642 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc1d2b74-ba27-49fe-8bb8-c491da856c84-config-data\") pod \"ceilometer-0\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " pod="openstack/ceilometer-0"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.906670 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc1d2b74-ba27-49fe-8bb8-c491da856c84-scripts\") pod \"ceilometer-0\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " pod="openstack/ceilometer-0"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.906772 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxzxx\" (UniqueName: \"kubernetes.io/projected/bc1d2b74-ba27-49fe-8bb8-c491da856c84-kube-api-access-cxzxx\") pod \"ceilometer-0\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " pod="openstack/ceilometer-0"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.906803 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc1d2b74-ba27-49fe-8bb8-c491da856c84-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " pod="openstack/ceilometer-0"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.906851 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc1d2b74-ba27-49fe-8bb8-c491da856c84-run-httpd\") pod \"ceilometer-0\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " pod="openstack/ceilometer-0"
Feb 27 19:18:04 crc kubenswrapper[4981]: E0227 19:18:04.909882 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e2a04cb6d63eef52af5c27025fcf24005bc7fd2e96846421f51443dbf2b295e\": container with ID starting with 7e2a04cb6d63eef52af5c27025fcf24005bc7fd2e96846421f51443dbf2b295e not found: ID does not exist" containerID="7e2a04cb6d63eef52af5c27025fcf24005bc7fd2e96846421f51443dbf2b295e"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.909959 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e2a04cb6d63eef52af5c27025fcf24005bc7fd2e96846421f51443dbf2b295e"} err="failed to get container status \"7e2a04cb6d63eef52af5c27025fcf24005bc7fd2e96846421f51443dbf2b295e\": rpc error: code = NotFound desc = could not find container \"7e2a04cb6d63eef52af5c27025fcf24005bc7fd2e96846421f51443dbf2b295e\": container with ID starting with 7e2a04cb6d63eef52af5c27025fcf24005bc7fd2e96846421f51443dbf2b295e not found: ID does not exist"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.910045 4981 scope.go:117] "RemoveContainer" containerID="59b9366e6b5f24334ef4e39a69d9cc4f63a6f03301b5edb8a93e375000e378f9"
Feb 27 19:18:04 crc kubenswrapper[4981]: E0227 19:18:04.910679 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59b9366e6b5f24334ef4e39a69d9cc4f63a6f03301b5edb8a93e375000e378f9\": container with ID starting with 59b9366e6b5f24334ef4e39a69d9cc4f63a6f03301b5edb8a93e375000e378f9 not found: ID does not exist" containerID="59b9366e6b5f24334ef4e39a69d9cc4f63a6f03301b5edb8a93e375000e378f9"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.910727 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59b9366e6b5f24334ef4e39a69d9cc4f63a6f03301b5edb8a93e375000e378f9"} err="failed to get container status \"59b9366e6b5f24334ef4e39a69d9cc4f63a6f03301b5edb8a93e375000e378f9\": rpc error: code = NotFound desc = could not find container \"59b9366e6b5f24334ef4e39a69d9cc4f63a6f03301b5edb8a93e375000e378f9\": container with ID starting with 59b9366e6b5f24334ef4e39a69d9cc4f63a6f03301b5edb8a93e375000e378f9 not found: ID does not exist"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.910757 4981 scope.go:117] "RemoveContainer" containerID="fe4d141d3c5312a65aaf2b26c50af0f91db5bf0c12532e893b1f41e3d4438ff0"
Feb 27 19:18:04 crc kubenswrapper[4981]: E0227 19:18:04.911167 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe4d141d3c5312a65aaf2b26c50af0f91db5bf0c12532e893b1f41e3d4438ff0\": container with ID starting with fe4d141d3c5312a65aaf2b26c50af0f91db5bf0c12532e893b1f41e3d4438ff0 not found: ID does not exist" containerID="fe4d141d3c5312a65aaf2b26c50af0f91db5bf0c12532e893b1f41e3d4438ff0"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.911255 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe4d141d3c5312a65aaf2b26c50af0f91db5bf0c12532e893b1f41e3d4438ff0"} err="failed to get container status \"fe4d141d3c5312a65aaf2b26c50af0f91db5bf0c12532e893b1f41e3d4438ff0\": rpc error: code = NotFound desc = could not find container \"fe4d141d3c5312a65aaf2b26c50af0f91db5bf0c12532e893b1f41e3d4438ff0\": container with ID starting with fe4d141d3c5312a65aaf2b26c50af0f91db5bf0c12532e893b1f41e3d4438ff0 not found: ID does not exist"
Feb 27 19:18:04 crc kubenswrapper[4981]: I0227 19:18:04.912158 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.012163 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxzxx\" (UniqueName: \"kubernetes.io/projected/bc1d2b74-ba27-49fe-8bb8-c491da856c84-kube-api-access-cxzxx\") pod \"ceilometer-0\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " pod="openstack/ceilometer-0"
Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.012221 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc1d2b74-ba27-49fe-8bb8-c491da856c84-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " pod="openstack/ceilometer-0"
Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.012255 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc1d2b74-ba27-49fe-8bb8-c491da856c84-run-httpd\") pod \"ceilometer-0\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " pod="openstack/ceilometer-0"
Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.012310 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc1d2b74-ba27-49fe-8bb8-c491da856c84-log-httpd\") pod \"ceilometer-0\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " pod="openstack/ceilometer-0"
Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.012378 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a864872-b01a-4a8b-b128-fd427b15a93b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6a864872-b01a-4a8b-b128-fd427b15a93b\") " pod="openstack/nova-cell0-conductor-0"
Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.012407 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkh72\" (UniqueName: \"kubernetes.io/projected/6a864872-b01a-4a8b-b128-fd427b15a93b-kube-api-access-nkh72\") pod \"nova-cell0-conductor-0\" (UID: \"6a864872-b01a-4a8b-b128-fd427b15a93b\") " pod="openstack/nova-cell0-conductor-0"
Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.012444 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a864872-b01a-4a8b-b128-fd427b15a93b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6a864872-b01a-4a8b-b128-fd427b15a93b\") " pod="openstack/nova-cell0-conductor-0"
Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.012464 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1d2b74-ba27-49fe-8bb8-c491da856c84-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " pod="openstack/ceilometer-0"
Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.012479 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc1d2b74-ba27-49fe-8bb8-c491da856c84-config-data\") pod \"ceilometer-0\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " pod="openstack/ceilometer-0"
Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.012499 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc1d2b74-ba27-49fe-8bb8-c491da856c84-scripts\") pod \"ceilometer-0\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " pod="openstack/ceilometer-0"
Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.017948 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc1d2b74-ba27-49fe-8bb8-c491da856c84-run-httpd\") pod \"ceilometer-0\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " pod="openstack/ceilometer-0"
Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.018586 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc1d2b74-ba27-49fe-8bb8-c491da856c84-scripts\") pod \"ceilometer-0\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " pod="openstack/ceilometer-0"
Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.019194 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc1d2b74-ba27-49fe-8bb8-c491da856c84-log-httpd\") pod \"ceilometer-0\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " pod="openstack/ceilometer-0"
Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.022680 4981 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1d2b74-ba27-49fe-8bb8-c491da856c84-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " pod="openstack/ceilometer-0" Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.022868 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc1d2b74-ba27-49fe-8bb8-c491da856c84-config-data\") pod \"ceilometer-0\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " pod="openstack/ceilometer-0" Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.036151 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc1d2b74-ba27-49fe-8bb8-c491da856c84-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " pod="openstack/ceilometer-0" Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.043342 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxzxx\" (UniqueName: \"kubernetes.io/projected/bc1d2b74-ba27-49fe-8bb8-c491da856c84-kube-api-access-cxzxx\") pod \"ceilometer-0\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " pod="openstack/ceilometer-0" Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.114284 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a864872-b01a-4a8b-b128-fd427b15a93b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6a864872-b01a-4a8b-b128-fd427b15a93b\") " pod="openstack/nova-cell0-conductor-0" Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.114335 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkh72\" (UniqueName: \"kubernetes.io/projected/6a864872-b01a-4a8b-b128-fd427b15a93b-kube-api-access-nkh72\") pod \"nova-cell0-conductor-0\" (UID: 
\"6a864872-b01a-4a8b-b128-fd427b15a93b\") " pod="openstack/nova-cell0-conductor-0" Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.114376 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a864872-b01a-4a8b-b128-fd427b15a93b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6a864872-b01a-4a8b-b128-fd427b15a93b\") " pod="openstack/nova-cell0-conductor-0" Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.118096 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a864872-b01a-4a8b-b128-fd427b15a93b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6a864872-b01a-4a8b-b128-fd427b15a93b\") " pod="openstack/nova-cell0-conductor-0" Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.118620 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a864872-b01a-4a8b-b128-fd427b15a93b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6a864872-b01a-4a8b-b128-fd427b15a93b\") " pod="openstack/nova-cell0-conductor-0" Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.137737 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkh72\" (UniqueName: \"kubernetes.io/projected/6a864872-b01a-4a8b-b128-fd427b15a93b-kube-api-access-nkh72\") pod \"nova-cell0-conductor-0\" (UID: \"6a864872-b01a-4a8b-b128-fd427b15a93b\") " pod="openstack/nova-cell0-conductor-0" Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.168123 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.199381 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.229798 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.271948 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536998-4jhhd" Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.421855 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl852\" (UniqueName: \"kubernetes.io/projected/155a2ade-2146-4eb4-8f2a-956e1dbbc1c9-kube-api-access-nl852\") pod \"155a2ade-2146-4eb4-8f2a-956e1dbbc1c9\" (UID: \"155a2ade-2146-4eb4-8f2a-956e1dbbc1c9\") " Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.427212 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/155a2ade-2146-4eb4-8f2a-956e1dbbc1c9-kube-api-access-nl852" (OuterVolumeSpecName: "kube-api-access-nl852") pod "155a2ade-2146-4eb4-8f2a-956e1dbbc1c9" (UID: "155a2ade-2146-4eb4-8f2a-956e1dbbc1c9"). InnerVolumeSpecName "kube-api-access-nl852". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.525828 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl852\" (UniqueName: \"kubernetes.io/projected/155a2ade-2146-4eb4-8f2a-956e1dbbc1c9-kube-api-access-nl852\") on node \"crc\" DevicePath \"\"" Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.644899 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e024b24e-321b-40c7-9574-40685c9a7ac9" path="/var/lib/kubelet/pods/e024b24e-321b-40c7-9574-40685c9a7ac9/volumes" Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.708345 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.721076 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.738952 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536998-4jhhd" event={"ID":"155a2ade-2146-4eb4-8f2a-956e1dbbc1c9","Type":"ContainerDied","Data":"cd94943feb5c03463e53a3908fdfeff328f58be7181ec0cf58d5d10cecabb95a"} Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.739018 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd94943feb5c03463e53a3908fdfeff328f58be7181ec0cf58d5d10cecabb95a" Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.739162 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536998-4jhhd" Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.747426 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"6a864872-b01a-4a8b-b128-fd427b15a93b","Type":"ContainerStarted","Data":"7f8b83d84de930273a883329453f9e52d3fdf3368d75cc078209b59859fcdd0e"} Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.750722 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc1d2b74-ba27-49fe-8bb8-c491da856c84","Type":"ContainerStarted","Data":"698bfca55359e312fd72022ca831b577746ba9a19bfe16b2c73ccac36e5754e4"} Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.775557 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536992-f82gz"] Feb 27 19:18:05 crc kubenswrapper[4981]: I0227 19:18:05.783904 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536992-f82gz"] Feb 27 19:18:06 crc kubenswrapper[4981]: I0227 19:18:06.638736 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:18:06 crc kubenswrapper[4981]: I0227 19:18:06.761327 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"6a864872-b01a-4a8b-b128-fd427b15a93b","Type":"ContainerStarted","Data":"327f56ac397dd5ee7f5a40a17d51b3b2b2a3f923f8d5565d24f976bac2c30580"} Feb 27 19:18:06 crc kubenswrapper[4981]: I0227 19:18:06.761841 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 27 19:18:06 crc kubenswrapper[4981]: I0227 19:18:06.761394 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="6a864872-b01a-4a8b-b128-fd427b15a93b" containerName="nova-cell0-conductor-conductor" 
containerID="cri-o://327f56ac397dd5ee7f5a40a17d51b3b2b2a3f923f8d5565d24f976bac2c30580" gracePeriod=30 Feb 27 19:18:06 crc kubenswrapper[4981]: I0227 19:18:06.765674 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc1d2b74-ba27-49fe-8bb8-c491da856c84","Type":"ContainerStarted","Data":"6c13e64b6d4942110966d0d76db27072f11783373c4c686cef2b7e1f56b032bb"} Feb 27 19:18:06 crc kubenswrapper[4981]: I0227 19:18:06.801142 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.801119024 podStartE2EDuration="2.801119024s" podCreationTimestamp="2026-02-27 19:18:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:18:06.779531813 +0000 UTC m=+1986.258312973" watchObservedRunningTime="2026-02-27 19:18:06.801119024 +0000 UTC m=+1986.279900184" Feb 27 19:18:07 crc kubenswrapper[4981]: I0227 19:18:07.639948 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74677fdb-8a47-430f-9ede-c884ece1c7c0" path="/var/lib/kubelet/pods/74677fdb-8a47-430f-9ede-c884ece1c7c0/volumes" Feb 27 19:18:07 crc kubenswrapper[4981]: I0227 19:18:07.781908 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc1d2b74-ba27-49fe-8bb8-c491da856c84","Type":"ContainerStarted","Data":"926356b546fd12a9c93f7ea669412e84adf110864a32bd04391f74e5abd95387"} Feb 27 19:18:08 crc kubenswrapper[4981]: I0227 19:18:08.795161 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc1d2b74-ba27-49fe-8bb8-c491da856c84","Type":"ContainerStarted","Data":"71631419bd10a4c614e3ffdc58cc91d40388e8ceb55d260f1a08634045573724"} Feb 27 19:18:10 crc kubenswrapper[4981]: E0227 19:18:10.201656 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot 
register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="327f56ac397dd5ee7f5a40a17d51b3b2b2a3f923f8d5565d24f976bac2c30580" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 19:18:10 crc kubenswrapper[4981]: E0227 19:18:10.203819 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="327f56ac397dd5ee7f5a40a17d51b3b2b2a3f923f8d5565d24f976bac2c30580" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 19:18:10 crc kubenswrapper[4981]: E0227 19:18:10.206018 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="327f56ac397dd5ee7f5a40a17d51b3b2b2a3f923f8d5565d24f976bac2c30580" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 19:18:10 crc kubenswrapper[4981]: E0227 19:18:10.206142 4981 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="6a864872-b01a-4a8b-b128-fd427b15a93b" containerName="nova-cell0-conductor-conductor" Feb 27 19:18:10 crc kubenswrapper[4981]: I0227 19:18:10.820392 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc1d2b74-ba27-49fe-8bb8-c491da856c84","Type":"ContainerStarted","Data":"67be7b5e824fcbdc879f70ae9731e81b501b7a44e3edcedb0423c456149fa3f2"} Feb 27 19:18:10 crc kubenswrapper[4981]: I0227 19:18:10.820614 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 19:18:10 crc kubenswrapper[4981]: I0227 19:18:10.820655 4981 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="bc1d2b74-ba27-49fe-8bb8-c491da856c84" containerName="proxy-httpd" containerID="cri-o://67be7b5e824fcbdc879f70ae9731e81b501b7a44e3edcedb0423c456149fa3f2" gracePeriod=30 Feb 27 19:18:10 crc kubenswrapper[4981]: I0227 19:18:10.820665 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc1d2b74-ba27-49fe-8bb8-c491da856c84" containerName="ceilometer-central-agent" containerID="cri-o://6c13e64b6d4942110966d0d76db27072f11783373c4c686cef2b7e1f56b032bb" gracePeriod=30 Feb 27 19:18:10 crc kubenswrapper[4981]: I0227 19:18:10.820708 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc1d2b74-ba27-49fe-8bb8-c491da856c84" containerName="sg-core" containerID="cri-o://71631419bd10a4c614e3ffdc58cc91d40388e8ceb55d260f1a08634045573724" gracePeriod=30 Feb 27 19:18:10 crc kubenswrapper[4981]: I0227 19:18:10.820706 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc1d2b74-ba27-49fe-8bb8-c491da856c84" containerName="ceilometer-notification-agent" containerID="cri-o://926356b546fd12a9c93f7ea669412e84adf110864a32bd04391f74e5abd95387" gracePeriod=30 Feb 27 19:18:10 crc kubenswrapper[4981]: I0227 19:18:10.851208 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.697813471 podStartE2EDuration="6.851187608s" podCreationTimestamp="2026-02-27 19:18:04 +0000 UTC" firstStartedPulling="2026-02-27 19:18:05.710794533 +0000 UTC m=+1985.189575693" lastFinishedPulling="2026-02-27 19:18:09.86416867 +0000 UTC m=+1989.342949830" observedRunningTime="2026-02-27 19:18:10.843942495 +0000 UTC m=+1990.322723655" watchObservedRunningTime="2026-02-27 19:18:10.851187608 +0000 UTC m=+1990.329968768" Feb 27 19:18:11 crc kubenswrapper[4981]: I0227 19:18:11.831528 4981 generic.go:334] "Generic (PLEG): container finished" 
podID="bc1d2b74-ba27-49fe-8bb8-c491da856c84" containerID="67be7b5e824fcbdc879f70ae9731e81b501b7a44e3edcedb0423c456149fa3f2" exitCode=0 Feb 27 19:18:11 crc kubenswrapper[4981]: I0227 19:18:11.832326 4981 generic.go:334] "Generic (PLEG): container finished" podID="bc1d2b74-ba27-49fe-8bb8-c491da856c84" containerID="71631419bd10a4c614e3ffdc58cc91d40388e8ceb55d260f1a08634045573724" exitCode=2 Feb 27 19:18:11 crc kubenswrapper[4981]: I0227 19:18:11.832407 4981 generic.go:334] "Generic (PLEG): container finished" podID="bc1d2b74-ba27-49fe-8bb8-c491da856c84" containerID="926356b546fd12a9c93f7ea669412e84adf110864a32bd04391f74e5abd95387" exitCode=0 Feb 27 19:18:11 crc kubenswrapper[4981]: I0227 19:18:11.831596 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc1d2b74-ba27-49fe-8bb8-c491da856c84","Type":"ContainerDied","Data":"67be7b5e824fcbdc879f70ae9731e81b501b7a44e3edcedb0423c456149fa3f2"} Feb 27 19:18:11 crc kubenswrapper[4981]: I0227 19:18:11.832545 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc1d2b74-ba27-49fe-8bb8-c491da856c84","Type":"ContainerDied","Data":"71631419bd10a4c614e3ffdc58cc91d40388e8ceb55d260f1a08634045573724"} Feb 27 19:18:11 crc kubenswrapper[4981]: I0227 19:18:11.832610 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc1d2b74-ba27-49fe-8bb8-c491da856c84","Type":"ContainerDied","Data":"926356b546fd12a9c93f7ea669412e84adf110864a32bd04391f74e5abd95387"} Feb 27 19:18:13 crc kubenswrapper[4981]: I0227 19:18:13.043418 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-647d4dfc78-slpwd" Feb 27 19:18:15 crc kubenswrapper[4981]: E0227 19:18:15.201469 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="327f56ac397dd5ee7f5a40a17d51b3b2b2a3f923f8d5565d24f976bac2c30580" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 19:18:15 crc kubenswrapper[4981]: E0227 19:18:15.203549 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="327f56ac397dd5ee7f5a40a17d51b3b2b2a3f923f8d5565d24f976bac2c30580" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 19:18:15 crc kubenswrapper[4981]: E0227 19:18:15.205482 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="327f56ac397dd5ee7f5a40a17d51b3b2b2a3f923f8d5565d24f976bac2c30580" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 19:18:15 crc kubenswrapper[4981]: E0227 19:18:15.205521 4981 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="6a864872-b01a-4a8b-b128-fd427b15a93b" containerName="nova-cell0-conductor-conductor" Feb 27 19:18:15 crc kubenswrapper[4981]: I0227 19:18:15.873732 4981 generic.go:334] "Generic (PLEG): container finished" podID="bc1d2b74-ba27-49fe-8bb8-c491da856c84" containerID="6c13e64b6d4942110966d0d76db27072f11783373c4c686cef2b7e1f56b032bb" exitCode=0 Feb 27 19:18:15 crc kubenswrapper[4981]: I0227 19:18:15.873874 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc1d2b74-ba27-49fe-8bb8-c491da856c84","Type":"ContainerDied","Data":"6c13e64b6d4942110966d0d76db27072f11783373c4c686cef2b7e1f56b032bb"} Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.418870 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.556872 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc1d2b74-ba27-49fe-8bb8-c491da856c84-config-data\") pod \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.557304 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxzxx\" (UniqueName: \"kubernetes.io/projected/bc1d2b74-ba27-49fe-8bb8-c491da856c84-kube-api-access-cxzxx\") pod \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.557386 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc1d2b74-ba27-49fe-8bb8-c491da856c84-run-httpd\") pod \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.557448 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1d2b74-ba27-49fe-8bb8-c491da856c84-combined-ca-bundle\") pod \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.557561 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc1d2b74-ba27-49fe-8bb8-c491da856c84-scripts\") pod \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.557645 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/bc1d2b74-ba27-49fe-8bb8-c491da856c84-sg-core-conf-yaml\") pod \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.557709 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc1d2b74-ba27-49fe-8bb8-c491da856c84-log-httpd\") pod \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\" (UID: \"bc1d2b74-ba27-49fe-8bb8-c491da856c84\") " Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.557900 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc1d2b74-ba27-49fe-8bb8-c491da856c84-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bc1d2b74-ba27-49fe-8bb8-c491da856c84" (UID: "bc1d2b74-ba27-49fe-8bb8-c491da856c84"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.558278 4981 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc1d2b74-ba27-49fe-8bb8-c491da856c84-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.558617 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc1d2b74-ba27-49fe-8bb8-c491da856c84-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bc1d2b74-ba27-49fe-8bb8-c491da856c84" (UID: "bc1d2b74-ba27-49fe-8bb8-c491da856c84"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.562232 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1d2b74-ba27-49fe-8bb8-c491da856c84-scripts" (OuterVolumeSpecName: "scripts") pod "bc1d2b74-ba27-49fe-8bb8-c491da856c84" (UID: "bc1d2b74-ba27-49fe-8bb8-c491da856c84"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.562290 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc1d2b74-ba27-49fe-8bb8-c491da856c84-kube-api-access-cxzxx" (OuterVolumeSpecName: "kube-api-access-cxzxx") pod "bc1d2b74-ba27-49fe-8bb8-c491da856c84" (UID: "bc1d2b74-ba27-49fe-8bb8-c491da856c84"). InnerVolumeSpecName "kube-api-access-cxzxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.588428 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1d2b74-ba27-49fe-8bb8-c491da856c84-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bc1d2b74-ba27-49fe-8bb8-c491da856c84" (UID: "bc1d2b74-ba27-49fe-8bb8-c491da856c84"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.635516 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1d2b74-ba27-49fe-8bb8-c491da856c84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc1d2b74-ba27-49fe-8bb8-c491da856c84" (UID: "bc1d2b74-ba27-49fe-8bb8-c491da856c84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.658299 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1d2b74-ba27-49fe-8bb8-c491da856c84-config-data" (OuterVolumeSpecName: "config-data") pod "bc1d2b74-ba27-49fe-8bb8-c491da856c84" (UID: "bc1d2b74-ba27-49fe-8bb8-c491da856c84"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.659722 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc1d2b74-ba27-49fe-8bb8-c491da856c84-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.659750 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxzxx\" (UniqueName: \"kubernetes.io/projected/bc1d2b74-ba27-49fe-8bb8-c491da856c84-kube-api-access-cxzxx\") on node \"crc\" DevicePath \"\"" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.659767 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1d2b74-ba27-49fe-8bb8-c491da856c84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.659781 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc1d2b74-ba27-49fe-8bb8-c491da856c84-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.659792 4981 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc1d2b74-ba27-49fe-8bb8-c491da856c84-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.659805 4981 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc1d2b74-ba27-49fe-8bb8-c491da856c84-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.886608 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc1d2b74-ba27-49fe-8bb8-c491da856c84","Type":"ContainerDied","Data":"698bfca55359e312fd72022ca831b577746ba9a19bfe16b2c73ccac36e5754e4"} Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 
19:18:16.886697 4981 scope.go:117] "RemoveContainer" containerID="67be7b5e824fcbdc879f70ae9731e81b501b7a44e3edcedb0423c456149fa3f2" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.886696 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.925732 4981 scope.go:117] "RemoveContainer" containerID="71631419bd10a4c614e3ffdc58cc91d40388e8ceb55d260f1a08634045573724" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.929030 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.950146 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.950647 4981 scope.go:117] "RemoveContainer" containerID="926356b546fd12a9c93f7ea669412e84adf110864a32bd04391f74e5abd95387" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.967907 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:18:16 crc kubenswrapper[4981]: E0227 19:18:16.968387 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc1d2b74-ba27-49fe-8bb8-c491da856c84" containerName="proxy-httpd" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.968406 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc1d2b74-ba27-49fe-8bb8-c491da856c84" containerName="proxy-httpd" Feb 27 19:18:16 crc kubenswrapper[4981]: E0227 19:18:16.968435 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc1d2b74-ba27-49fe-8bb8-c491da856c84" containerName="ceilometer-central-agent" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.968507 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc1d2b74-ba27-49fe-8bb8-c491da856c84" containerName="ceilometer-central-agent" Feb 27 19:18:16 crc kubenswrapper[4981]: E0227 19:18:16.968603 4981 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="bc1d2b74-ba27-49fe-8bb8-c491da856c84" containerName="ceilometer-notification-agent" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.968613 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc1d2b74-ba27-49fe-8bb8-c491da856c84" containerName="ceilometer-notification-agent" Feb 27 19:18:16 crc kubenswrapper[4981]: E0227 19:18:16.968723 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="155a2ade-2146-4eb4-8f2a-956e1dbbc1c9" containerName="oc" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.968733 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="155a2ade-2146-4eb4-8f2a-956e1dbbc1c9" containerName="oc" Feb 27 19:18:16 crc kubenswrapper[4981]: E0227 19:18:16.968802 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc1d2b74-ba27-49fe-8bb8-c491da856c84" containerName="sg-core" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.968812 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc1d2b74-ba27-49fe-8bb8-c491da856c84" containerName="sg-core" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.969148 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="155a2ade-2146-4eb4-8f2a-956e1dbbc1c9" containerName="oc" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.969172 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc1d2b74-ba27-49fe-8bb8-c491da856c84" containerName="ceilometer-notification-agent" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.969182 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc1d2b74-ba27-49fe-8bb8-c491da856c84" containerName="sg-core" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.969189 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc1d2b74-ba27-49fe-8bb8-c491da856c84" containerName="proxy-httpd" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.969199 4981 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="bc1d2b74-ba27-49fe-8bb8-c491da856c84" containerName="ceilometer-central-agent" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.972787 4981 scope.go:117] "RemoveContainer" containerID="6c13e64b6d4942110966d0d76db27072f11783373c4c686cef2b7e1f56b032bb" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.975619 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.976879 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.978552 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.979498 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 19:18:16 crc kubenswrapper[4981]: I0227 19:18:16.980469 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.055430 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-647d4dfc78-slpwd"] Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.055691 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-647d4dfc78-slpwd" podUID="5bf50bd5-6795-47b1-a50a-093079710979" containerName="neutron-api" containerID="cri-o://944770747a40efacbf1c2aef296fcbb221224782752b669f1d4a4e73447ad0b2" gracePeriod=30 Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.056211 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-647d4dfc78-slpwd" podUID="5bf50bd5-6795-47b1-a50a-093079710979" containerName="neutron-httpd" containerID="cri-o://ee96d4c90dd898a33ba6e7f046f32c826f786b486a68ad26dfd855005fa5ad42" gracePeriod=30 Feb 27 19:18:17 crc 
kubenswrapper[4981]: I0227 19:18:17.068411 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1f0012-8326-4378-9870-0da9e2128a42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " pod="openstack/ceilometer-0" Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.068585 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f1f0012-8326-4378-9870-0da9e2128a42-run-httpd\") pod \"ceilometer-0\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " pod="openstack/ceilometer-0" Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.068677 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f1f0012-8326-4378-9870-0da9e2128a42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " pod="openstack/ceilometer-0" Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.068792 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfqhs\" (UniqueName: \"kubernetes.io/projected/4f1f0012-8326-4378-9870-0da9e2128a42-kube-api-access-kfqhs\") pod \"ceilometer-0\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " pod="openstack/ceilometer-0" Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.068858 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f1f0012-8326-4378-9870-0da9e2128a42-config-data\") pod \"ceilometer-0\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " pod="openstack/ceilometer-0" Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.068924 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f1f0012-8326-4378-9870-0da9e2128a42-log-httpd\") pod \"ceilometer-0\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " pod="openstack/ceilometer-0" Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.069067 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f1f0012-8326-4378-9870-0da9e2128a42-scripts\") pod \"ceilometer-0\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " pod="openstack/ceilometer-0" Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.171105 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1f0012-8326-4378-9870-0da9e2128a42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " pod="openstack/ceilometer-0" Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.171187 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f1f0012-8326-4378-9870-0da9e2128a42-run-httpd\") pod \"ceilometer-0\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " pod="openstack/ceilometer-0" Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.171244 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f1f0012-8326-4378-9870-0da9e2128a42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " pod="openstack/ceilometer-0" Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.171270 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfqhs\" (UniqueName: \"kubernetes.io/projected/4f1f0012-8326-4378-9870-0da9e2128a42-kube-api-access-kfqhs\") pod \"ceilometer-0\" (UID: 
\"4f1f0012-8326-4378-9870-0da9e2128a42\") " pod="openstack/ceilometer-0" Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.171317 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f1f0012-8326-4378-9870-0da9e2128a42-config-data\") pod \"ceilometer-0\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " pod="openstack/ceilometer-0" Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.171344 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f1f0012-8326-4378-9870-0da9e2128a42-log-httpd\") pod \"ceilometer-0\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " pod="openstack/ceilometer-0" Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.171407 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f1f0012-8326-4378-9870-0da9e2128a42-scripts\") pod \"ceilometer-0\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " pod="openstack/ceilometer-0" Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.175260 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f1f0012-8326-4378-9870-0da9e2128a42-log-httpd\") pod \"ceilometer-0\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " pod="openstack/ceilometer-0" Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.178979 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f1f0012-8326-4378-9870-0da9e2128a42-config-data\") pod \"ceilometer-0\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " pod="openstack/ceilometer-0" Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.179120 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/4f1f0012-8326-4378-9870-0da9e2128a42-run-httpd\") pod \"ceilometer-0\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " pod="openstack/ceilometer-0" Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.180984 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1f0012-8326-4378-9870-0da9e2128a42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " pod="openstack/ceilometer-0" Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.181775 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f1f0012-8326-4378-9870-0da9e2128a42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " pod="openstack/ceilometer-0" Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.184632 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f1f0012-8326-4378-9870-0da9e2128a42-scripts\") pod \"ceilometer-0\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " pod="openstack/ceilometer-0" Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.197154 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfqhs\" (UniqueName: \"kubernetes.io/projected/4f1f0012-8326-4378-9870-0da9e2128a42-kube-api-access-kfqhs\") pod \"ceilometer-0\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " pod="openstack/ceilometer-0" Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.301525 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.639357 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc1d2b74-ba27-49fe-8bb8-c491da856c84" path="/var/lib/kubelet/pods/bc1d2b74-ba27-49fe-8bb8-c491da856c84/volumes" Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.783072 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:18:17 crc kubenswrapper[4981]: W0227 19:18:17.797701 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f1f0012_8326_4378_9870_0da9e2128a42.slice/crio-c5efd250e8704917f562c944f6ec88ce1ff75c5b549530a0b119026a2d8dc084 WatchSource:0}: Error finding container c5efd250e8704917f562c944f6ec88ce1ff75c5b549530a0b119026a2d8dc084: Status 404 returned error can't find the container with id c5efd250e8704917f562c944f6ec88ce1ff75c5b549530a0b119026a2d8dc084 Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.902445 4981 generic.go:334] "Generic (PLEG): container finished" podID="5bf50bd5-6795-47b1-a50a-093079710979" containerID="ee96d4c90dd898a33ba6e7f046f32c826f786b486a68ad26dfd855005fa5ad42" exitCode=0 Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.902522 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-647d4dfc78-slpwd" event={"ID":"5bf50bd5-6795-47b1-a50a-093079710979","Type":"ContainerDied","Data":"ee96d4c90dd898a33ba6e7f046f32c826f786b486a68ad26dfd855005fa5ad42"} Feb 27 19:18:17 crc kubenswrapper[4981]: I0227 19:18:17.904144 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f1f0012-8326-4378-9870-0da9e2128a42","Type":"ContainerStarted","Data":"c5efd250e8704917f562c944f6ec88ce1ff75c5b549530a0b119026a2d8dc084"} Feb 27 19:18:18 crc kubenswrapper[4981]: I0227 19:18:18.680436 4981 scope.go:117] "RemoveContainer" 
containerID="9cf48594755d9db8e2e66fd22d01a7734be5676945d918f8133a713b2764bb34" Feb 27 19:18:18 crc kubenswrapper[4981]: I0227 19:18:18.713511 4981 scope.go:117] "RemoveContainer" containerID="39d5ad09342b70ef57075489904198cdcbe2e48fc3cdd2dd68474762052581d9" Feb 27 19:18:18 crc kubenswrapper[4981]: I0227 19:18:18.752741 4981 scope.go:117] "RemoveContainer" containerID="26e39f1f6bf137b20eaa6d22829be4fae92fac625401e0025afb0e9f99e16dea" Feb 27 19:18:18 crc kubenswrapper[4981]: I0227 19:18:18.800567 4981 scope.go:117] "RemoveContainer" containerID="1d4173a980b74b2b82881942d9c0606a560f6be487e857f7c7eadf0f82dd0572" Feb 27 19:18:18 crc kubenswrapper[4981]: I0227 19:18:18.859710 4981 scope.go:117] "RemoveContainer" containerID="2eaac1853ad432d3fb764761e37438f6ab0922f0575edfce5b4212dcef893563" Feb 27 19:18:18 crc kubenswrapper[4981]: I0227 19:18:18.927437 4981 scope.go:117] "RemoveContainer" containerID="d6dcd57121655160a41e0b97b4fffb64705b3dfd74fd3be4461c8485a6ada855" Feb 27 19:18:18 crc kubenswrapper[4981]: I0227 19:18:18.978992 4981 scope.go:117] "RemoveContainer" containerID="e495f30379ea5eec7fcfc406ec77e013036179cba6329049e46a388769876ad2" Feb 27 19:18:19 crc kubenswrapper[4981]: I0227 19:18:19.035692 4981 scope.go:117] "RemoveContainer" containerID="52679b7eb4bb52e89dc54884eeca292914df6978af804913ebf2b2f17b260c15" Feb 27 19:18:19 crc kubenswrapper[4981]: I0227 19:18:19.938315 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f1f0012-8326-4378-9870-0da9e2128a42","Type":"ContainerStarted","Data":"cf37390c0d281caea3ea0a3e13a5ad34f321561bf275a4b694e4c3e1b563144d"} Feb 27 19:18:20 crc kubenswrapper[4981]: E0227 19:18:20.202259 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="327f56ac397dd5ee7f5a40a17d51b3b2b2a3f923f8d5565d24f976bac2c30580" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 19:18:20 crc kubenswrapper[4981]: E0227 19:18:20.203622 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="327f56ac397dd5ee7f5a40a17d51b3b2b2a3f923f8d5565d24f976bac2c30580" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 19:18:20 crc kubenswrapper[4981]: E0227 19:18:20.205033 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="327f56ac397dd5ee7f5a40a17d51b3b2b2a3f923f8d5565d24f976bac2c30580" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 19:18:20 crc kubenswrapper[4981]: E0227 19:18:20.205087 4981 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="6a864872-b01a-4a8b-b128-fd427b15a93b" containerName="nova-cell0-conductor-conductor" Feb 27 19:18:20 crc kubenswrapper[4981]: I0227 19:18:20.963271 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f1f0012-8326-4378-9870-0da9e2128a42","Type":"ContainerStarted","Data":"39a33d7f90b81049a430bd8fb87e2b3fa095391998769fa7d5f18caca0535e5d"} Feb 27 19:18:21 crc kubenswrapper[4981]: I0227 19:18:21.986584 4981 generic.go:334] "Generic (PLEG): container finished" podID="5bf50bd5-6795-47b1-a50a-093079710979" containerID="944770747a40efacbf1c2aef296fcbb221224782752b669f1d4a4e73447ad0b2" exitCode=0 Feb 27 19:18:21 crc kubenswrapper[4981]: I0227 19:18:21.986761 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-647d4dfc78-slpwd" 
event={"ID":"5bf50bd5-6795-47b1-a50a-093079710979","Type":"ContainerDied","Data":"944770747a40efacbf1c2aef296fcbb221224782752b669f1d4a4e73447ad0b2"} Feb 27 19:18:21 crc kubenswrapper[4981]: I0227 19:18:21.990196 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f1f0012-8326-4378-9870-0da9e2128a42","Type":"ContainerStarted","Data":"b1b4d22213f44911ebe10a3402ffc2fe9e45bf83ac72fb843b704b20dfc45c5d"} Feb 27 19:18:22 crc kubenswrapper[4981]: I0227 19:18:22.221022 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-647d4dfc78-slpwd" Feb 27 19:18:22 crc kubenswrapper[4981]: I0227 19:18:22.374670 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w7km\" (UniqueName: \"kubernetes.io/projected/5bf50bd5-6795-47b1-a50a-093079710979-kube-api-access-2w7km\") pod \"5bf50bd5-6795-47b1-a50a-093079710979\" (UID: \"5bf50bd5-6795-47b1-a50a-093079710979\") " Feb 27 19:18:22 crc kubenswrapper[4981]: I0227 19:18:22.374782 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf50bd5-6795-47b1-a50a-093079710979-combined-ca-bundle\") pod \"5bf50bd5-6795-47b1-a50a-093079710979\" (UID: \"5bf50bd5-6795-47b1-a50a-093079710979\") " Feb 27 19:18:22 crc kubenswrapper[4981]: I0227 19:18:22.374840 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bf50bd5-6795-47b1-a50a-093079710979-ovndb-tls-certs\") pod \"5bf50bd5-6795-47b1-a50a-093079710979\" (UID: \"5bf50bd5-6795-47b1-a50a-093079710979\") " Feb 27 19:18:22 crc kubenswrapper[4981]: I0227 19:18:22.374859 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5bf50bd5-6795-47b1-a50a-093079710979-httpd-config\") pod 
\"5bf50bd5-6795-47b1-a50a-093079710979\" (UID: \"5bf50bd5-6795-47b1-a50a-093079710979\") " Feb 27 19:18:22 crc kubenswrapper[4981]: I0227 19:18:22.374963 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5bf50bd5-6795-47b1-a50a-093079710979-config\") pod \"5bf50bd5-6795-47b1-a50a-093079710979\" (UID: \"5bf50bd5-6795-47b1-a50a-093079710979\") " Feb 27 19:18:22 crc kubenswrapper[4981]: I0227 19:18:22.381687 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bf50bd5-6795-47b1-a50a-093079710979-kube-api-access-2w7km" (OuterVolumeSpecName: "kube-api-access-2w7km") pod "5bf50bd5-6795-47b1-a50a-093079710979" (UID: "5bf50bd5-6795-47b1-a50a-093079710979"). InnerVolumeSpecName "kube-api-access-2w7km". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:18:22 crc kubenswrapper[4981]: I0227 19:18:22.396325 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bf50bd5-6795-47b1-a50a-093079710979-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5bf50bd5-6795-47b1-a50a-093079710979" (UID: "5bf50bd5-6795-47b1-a50a-093079710979"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:18:22 crc kubenswrapper[4981]: I0227 19:18:22.428726 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bf50bd5-6795-47b1-a50a-093079710979-config" (OuterVolumeSpecName: "config") pod "5bf50bd5-6795-47b1-a50a-093079710979" (UID: "5bf50bd5-6795-47b1-a50a-093079710979"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:18:22 crc kubenswrapper[4981]: I0227 19:18:22.443163 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bf50bd5-6795-47b1-a50a-093079710979-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bf50bd5-6795-47b1-a50a-093079710979" (UID: "5bf50bd5-6795-47b1-a50a-093079710979"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:18:22 crc kubenswrapper[4981]: I0227 19:18:22.462881 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bf50bd5-6795-47b1-a50a-093079710979-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5bf50bd5-6795-47b1-a50a-093079710979" (UID: "5bf50bd5-6795-47b1-a50a-093079710979"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:18:22 crc kubenswrapper[4981]: I0227 19:18:22.477349 4981 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bf50bd5-6795-47b1-a50a-093079710979-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:18:22 crc kubenswrapper[4981]: I0227 19:18:22.477386 4981 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5bf50bd5-6795-47b1-a50a-093079710979-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:18:22 crc kubenswrapper[4981]: I0227 19:18:22.477396 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5bf50bd5-6795-47b1-a50a-093079710979-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:18:22 crc kubenswrapper[4981]: I0227 19:18:22.477405 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w7km\" (UniqueName: \"kubernetes.io/projected/5bf50bd5-6795-47b1-a50a-093079710979-kube-api-access-2w7km\") on node \"crc\" DevicePath \"\"" 
Feb 27 19:18:22 crc kubenswrapper[4981]: I0227 19:18:22.477419 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf50bd5-6795-47b1-a50a-093079710979-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:18:22 crc kubenswrapper[4981]: I0227 19:18:22.999929 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-647d4dfc78-slpwd" event={"ID":"5bf50bd5-6795-47b1-a50a-093079710979","Type":"ContainerDied","Data":"d500027d02f5524d53f999f81a33375fe6dd76cf932bc1665bf0f07053860e85"} Feb 27 19:18:22 crc kubenswrapper[4981]: I0227 19:18:22.999991 4981 scope.go:117] "RemoveContainer" containerID="ee96d4c90dd898a33ba6e7f046f32c826f786b486a68ad26dfd855005fa5ad42" Feb 27 19:18:23 crc kubenswrapper[4981]: I0227 19:18:23.000142 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-647d4dfc78-slpwd" Feb 27 19:18:23 crc kubenswrapper[4981]: I0227 19:18:23.034578 4981 scope.go:117] "RemoveContainer" containerID="944770747a40efacbf1c2aef296fcbb221224782752b669f1d4a4e73447ad0b2" Feb 27 19:18:23 crc kubenswrapper[4981]: I0227 19:18:23.038287 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-647d4dfc78-slpwd"] Feb 27 19:18:23 crc kubenswrapper[4981]: I0227 19:18:23.051921 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-647d4dfc78-slpwd"] Feb 27 19:18:23 crc kubenswrapper[4981]: I0227 19:18:23.646380 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bf50bd5-6795-47b1-a50a-093079710979" path="/var/lib/kubelet/pods/5bf50bd5-6795-47b1-a50a-093079710979/volumes" Feb 27 19:18:24 crc kubenswrapper[4981]: I0227 19:18:24.013366 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f1f0012-8326-4378-9870-0da9e2128a42","Type":"ContainerStarted","Data":"683781b1d22b4e5d94316ec9c2e6b71289420b5a72944d06a1be1ebb8a8c1acc"} Feb 
27 19:18:24 crc kubenswrapper[4981]: I0227 19:18:24.013506 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 19:18:24 crc kubenswrapper[4981]: I0227 19:18:24.041557 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.545104614 podStartE2EDuration="8.041536062s" podCreationTimestamp="2026-02-27 19:18:16 +0000 UTC" firstStartedPulling="2026-02-27 19:18:17.809165526 +0000 UTC m=+1997.287946686" lastFinishedPulling="2026-02-27 19:18:23.305596964 +0000 UTC m=+2002.784378134" observedRunningTime="2026-02-27 19:18:24.037142648 +0000 UTC m=+2003.515923808" watchObservedRunningTime="2026-02-27 19:18:24.041536062 +0000 UTC m=+2003.520317222" Feb 27 19:18:25 crc kubenswrapper[4981]: E0227 19:18:25.202094 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="327f56ac397dd5ee7f5a40a17d51b3b2b2a3f923f8d5565d24f976bac2c30580" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 19:18:25 crc kubenswrapper[4981]: E0227 19:18:25.203387 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="327f56ac397dd5ee7f5a40a17d51b3b2b2a3f923f8d5565d24f976bac2c30580" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 19:18:25 crc kubenswrapper[4981]: E0227 19:18:25.204523 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="327f56ac397dd5ee7f5a40a17d51b3b2b2a3f923f8d5565d24f976bac2c30580" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 19:18:25 crc kubenswrapper[4981]: E0227 
19:18:25.204563 4981 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="6a864872-b01a-4a8b-b128-fd427b15a93b" containerName="nova-cell0-conductor-conductor" Feb 27 19:18:30 crc kubenswrapper[4981]: E0227 19:18:30.201311 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="327f56ac397dd5ee7f5a40a17d51b3b2b2a3f923f8d5565d24f976bac2c30580" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 19:18:30 crc kubenswrapper[4981]: E0227 19:18:30.203689 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="327f56ac397dd5ee7f5a40a17d51b3b2b2a3f923f8d5565d24f976bac2c30580" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 19:18:30 crc kubenswrapper[4981]: E0227 19:18:30.204852 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="327f56ac397dd5ee7f5a40a17d51b3b2b2a3f923f8d5565d24f976bac2c30580" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 19:18:30 crc kubenswrapper[4981]: E0227 19:18:30.204927 4981 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="6a864872-b01a-4a8b-b128-fd427b15a93b" containerName="nova-cell0-conductor-conductor" Feb 27 19:18:35 crc kubenswrapper[4981]: E0227 19:18:35.201826 4981 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="327f56ac397dd5ee7f5a40a17d51b3b2b2a3f923f8d5565d24f976bac2c30580" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 19:18:35 crc kubenswrapper[4981]: E0227 19:18:35.204285 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="327f56ac397dd5ee7f5a40a17d51b3b2b2a3f923f8d5565d24f976bac2c30580" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 19:18:35 crc kubenswrapper[4981]: E0227 19:18:35.206442 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="327f56ac397dd5ee7f5a40a17d51b3b2b2a3f923f8d5565d24f976bac2c30580" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 19:18:35 crc kubenswrapper[4981]: E0227 19:18:35.206517 4981 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="6a864872-b01a-4a8b-b128-fd427b15a93b" containerName="nova-cell0-conductor-conductor" Feb 27 19:18:37 crc kubenswrapper[4981]: I0227 19:18:37.135064 4981 generic.go:334] "Generic (PLEG): container finished" podID="6a864872-b01a-4a8b-b128-fd427b15a93b" containerID="327f56ac397dd5ee7f5a40a17d51b3b2b2a3f923f8d5565d24f976bac2c30580" exitCode=137 Feb 27 19:18:37 crc kubenswrapper[4981]: I0227 19:18:37.135164 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" 
event={"ID":"6a864872-b01a-4a8b-b128-fd427b15a93b","Type":"ContainerDied","Data":"327f56ac397dd5ee7f5a40a17d51b3b2b2a3f923f8d5565d24f976bac2c30580"} Feb 27 19:18:37 crc kubenswrapper[4981]: I0227 19:18:37.135438 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"6a864872-b01a-4a8b-b128-fd427b15a93b","Type":"ContainerDied","Data":"7f8b83d84de930273a883329453f9e52d3fdf3368d75cc078209b59859fcdd0e"} Feb 27 19:18:37 crc kubenswrapper[4981]: I0227 19:18:37.135456 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f8b83d84de930273a883329453f9e52d3fdf3368d75cc078209b59859fcdd0e" Feb 27 19:18:37 crc kubenswrapper[4981]: I0227 19:18:37.159678 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 27 19:18:37 crc kubenswrapper[4981]: I0227 19:18:37.291264 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a864872-b01a-4a8b-b128-fd427b15a93b-combined-ca-bundle\") pod \"6a864872-b01a-4a8b-b128-fd427b15a93b\" (UID: \"6a864872-b01a-4a8b-b128-fd427b15a93b\") " Feb 27 19:18:37 crc kubenswrapper[4981]: I0227 19:18:37.291375 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a864872-b01a-4a8b-b128-fd427b15a93b-config-data\") pod \"6a864872-b01a-4a8b-b128-fd427b15a93b\" (UID: \"6a864872-b01a-4a8b-b128-fd427b15a93b\") " Feb 27 19:18:37 crc kubenswrapper[4981]: I0227 19:18:37.291554 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkh72\" (UniqueName: \"kubernetes.io/projected/6a864872-b01a-4a8b-b128-fd427b15a93b-kube-api-access-nkh72\") pod \"6a864872-b01a-4a8b-b128-fd427b15a93b\" (UID: \"6a864872-b01a-4a8b-b128-fd427b15a93b\") " Feb 27 19:18:37 crc kubenswrapper[4981]: I0227 19:18:37.297703 4981 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a864872-b01a-4a8b-b128-fd427b15a93b-kube-api-access-nkh72" (OuterVolumeSpecName: "kube-api-access-nkh72") pod "6a864872-b01a-4a8b-b128-fd427b15a93b" (UID: "6a864872-b01a-4a8b-b128-fd427b15a93b"). InnerVolumeSpecName "kube-api-access-nkh72". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:18:37 crc kubenswrapper[4981]: I0227 19:18:37.319510 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a864872-b01a-4a8b-b128-fd427b15a93b-config-data" (OuterVolumeSpecName: "config-data") pod "6a864872-b01a-4a8b-b128-fd427b15a93b" (UID: "6a864872-b01a-4a8b-b128-fd427b15a93b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:18:37 crc kubenswrapper[4981]: I0227 19:18:37.325839 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a864872-b01a-4a8b-b128-fd427b15a93b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a864872-b01a-4a8b-b128-fd427b15a93b" (UID: "6a864872-b01a-4a8b-b128-fd427b15a93b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:18:37 crc kubenswrapper[4981]: I0227 19:18:37.394201 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a864872-b01a-4a8b-b128-fd427b15a93b-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:18:37 crc kubenswrapper[4981]: I0227 19:18:37.394694 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkh72\" (UniqueName: \"kubernetes.io/projected/6a864872-b01a-4a8b-b128-fd427b15a93b-kube-api-access-nkh72\") on node \"crc\" DevicePath \"\"" Feb 27 19:18:37 crc kubenswrapper[4981]: I0227 19:18:37.394714 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a864872-b01a-4a8b-b128-fd427b15a93b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:18:38 crc kubenswrapper[4981]: I0227 19:18:38.146709 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 27 19:18:38 crc kubenswrapper[4981]: I0227 19:18:38.211228 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 19:18:38 crc kubenswrapper[4981]: I0227 19:18:38.224582 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 19:18:38 crc kubenswrapper[4981]: I0227 19:18:38.244506 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 19:18:38 crc kubenswrapper[4981]: E0227 19:18:38.245233 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bf50bd5-6795-47b1-a50a-093079710979" containerName="neutron-api" Feb 27 19:18:38 crc kubenswrapper[4981]: I0227 19:18:38.245261 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bf50bd5-6795-47b1-a50a-093079710979" containerName="neutron-api" Feb 27 19:18:38 crc kubenswrapper[4981]: E0227 19:18:38.245282 4981 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="6a864872-b01a-4a8b-b128-fd427b15a93b" containerName="nova-cell0-conductor-conductor" Feb 27 19:18:38 crc kubenswrapper[4981]: I0227 19:18:38.245292 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a864872-b01a-4a8b-b128-fd427b15a93b" containerName="nova-cell0-conductor-conductor" Feb 27 19:18:38 crc kubenswrapper[4981]: E0227 19:18:38.245322 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bf50bd5-6795-47b1-a50a-093079710979" containerName="neutron-httpd" Feb 27 19:18:38 crc kubenswrapper[4981]: I0227 19:18:38.245331 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bf50bd5-6795-47b1-a50a-093079710979" containerName="neutron-httpd" Feb 27 19:18:38 crc kubenswrapper[4981]: I0227 19:18:38.245584 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bf50bd5-6795-47b1-a50a-093079710979" containerName="neutron-api" Feb 27 19:18:38 crc kubenswrapper[4981]: I0227 19:18:38.245613 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bf50bd5-6795-47b1-a50a-093079710979" containerName="neutron-httpd" Feb 27 19:18:38 crc kubenswrapper[4981]: I0227 19:18:38.245640 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a864872-b01a-4a8b-b128-fd427b15a93b" containerName="nova-cell0-conductor-conductor" Feb 27 19:18:38 crc kubenswrapper[4981]: I0227 19:18:38.246423 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 27 19:18:38 crc kubenswrapper[4981]: I0227 19:18:38.249822 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 27 19:18:38 crc kubenswrapper[4981]: I0227 19:18:38.256767 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8rxc8" Feb 27 19:18:38 crc kubenswrapper[4981]: I0227 19:18:38.257048 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 19:18:38 crc kubenswrapper[4981]: I0227 19:18:38.412908 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prtc7\" (UniqueName: \"kubernetes.io/projected/e4ec5ec3-4a83-4c2a-adde-600a759fcec2-kube-api-access-prtc7\") pod \"nova-cell0-conductor-0\" (UID: \"e4ec5ec3-4a83-4c2a-adde-600a759fcec2\") " pod="openstack/nova-cell0-conductor-0" Feb 27 19:18:38 crc kubenswrapper[4981]: I0227 19:18:38.413127 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ec5ec3-4a83-4c2a-adde-600a759fcec2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e4ec5ec3-4a83-4c2a-adde-600a759fcec2\") " pod="openstack/nova-cell0-conductor-0" Feb 27 19:18:38 crc kubenswrapper[4981]: I0227 19:18:38.413168 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4ec5ec3-4a83-4c2a-adde-600a759fcec2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e4ec5ec3-4a83-4c2a-adde-600a759fcec2\") " pod="openstack/nova-cell0-conductor-0" Feb 27 19:18:38 crc kubenswrapper[4981]: I0227 19:18:38.515074 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e4ec5ec3-4a83-4c2a-adde-600a759fcec2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e4ec5ec3-4a83-4c2a-adde-600a759fcec2\") " pod="openstack/nova-cell0-conductor-0" Feb 27 19:18:38 crc kubenswrapper[4981]: I0227 19:18:38.515179 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prtc7\" (UniqueName: \"kubernetes.io/projected/e4ec5ec3-4a83-4c2a-adde-600a759fcec2-kube-api-access-prtc7\") pod \"nova-cell0-conductor-0\" (UID: \"e4ec5ec3-4a83-4c2a-adde-600a759fcec2\") " pod="openstack/nova-cell0-conductor-0" Feb 27 19:18:38 crc kubenswrapper[4981]: I0227 19:18:38.515292 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ec5ec3-4a83-4c2a-adde-600a759fcec2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e4ec5ec3-4a83-4c2a-adde-600a759fcec2\") " pod="openstack/nova-cell0-conductor-0" Feb 27 19:18:38 crc kubenswrapper[4981]: I0227 19:18:38.522471 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ec5ec3-4a83-4c2a-adde-600a759fcec2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e4ec5ec3-4a83-4c2a-adde-600a759fcec2\") " pod="openstack/nova-cell0-conductor-0" Feb 27 19:18:38 crc kubenswrapper[4981]: I0227 19:18:38.523006 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4ec5ec3-4a83-4c2a-adde-600a759fcec2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e4ec5ec3-4a83-4c2a-adde-600a759fcec2\") " pod="openstack/nova-cell0-conductor-0" Feb 27 19:18:38 crc kubenswrapper[4981]: I0227 19:18:38.535354 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prtc7\" (UniqueName: \"kubernetes.io/projected/e4ec5ec3-4a83-4c2a-adde-600a759fcec2-kube-api-access-prtc7\") pod \"nova-cell0-conductor-0\" 
(UID: \"e4ec5ec3-4a83-4c2a-adde-600a759fcec2\") " pod="openstack/nova-cell0-conductor-0" Feb 27 19:18:38 crc kubenswrapper[4981]: I0227 19:18:38.564311 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 27 19:18:38 crc kubenswrapper[4981]: I0227 19:18:38.983043 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 19:18:38 crc kubenswrapper[4981]: W0227 19:18:38.985128 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4ec5ec3_4a83_4c2a_adde_600a759fcec2.slice/crio-ead35c69a11ab5026d86278c5d0d4f931ed000b394d657d0b91589213630b7ef WatchSource:0}: Error finding container ead35c69a11ab5026d86278c5d0d4f931ed000b394d657d0b91589213630b7ef: Status 404 returned error can't find the container with id ead35c69a11ab5026d86278c5d0d4f931ed000b394d657d0b91589213630b7ef Feb 27 19:18:39 crc kubenswrapper[4981]: I0227 19:18:39.160179 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e4ec5ec3-4a83-4c2a-adde-600a759fcec2","Type":"ContainerStarted","Data":"9c670261714a51e0bc1dd408854bbb5ff1ed9f5b62828c8c8ece1900fa737f24"} Feb 27 19:18:39 crc kubenswrapper[4981]: I0227 19:18:39.161539 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e4ec5ec3-4a83-4c2a-adde-600a759fcec2","Type":"ContainerStarted","Data":"ead35c69a11ab5026d86278c5d0d4f931ed000b394d657d0b91589213630b7ef"} Feb 27 19:18:39 crc kubenswrapper[4981]: I0227 19:18:39.161658 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 27 19:18:39 crc kubenswrapper[4981]: I0227 19:18:39.177658 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.177634631 podStartE2EDuration="1.177634631s" 
podCreationTimestamp="2026-02-27 19:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:18:39.174491215 +0000 UTC m=+2018.653272375" watchObservedRunningTime="2026-02-27 19:18:39.177634631 +0000 UTC m=+2018.656415791" Feb 27 19:18:39 crc kubenswrapper[4981]: I0227 19:18:39.639570 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a864872-b01a-4a8b-b128-fd427b15a93b" path="/var/lib/kubelet/pods/6a864872-b01a-4a8b-b128-fd427b15a93b/volumes" Feb 27 19:18:47 crc kubenswrapper[4981]: I0227 19:18:47.305532 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 27 19:18:48 crc kubenswrapper[4981]: I0227 19:18:48.594565 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.194128 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-k82mt"] Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.195744 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-k82mt" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.197550 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.198390 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.205457 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-k82mt"] Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.350975 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ef4760-0e11-422c-b084-afe3d47fbdac-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-k82mt\" (UID: \"25ef4760-0e11-422c-b084-afe3d47fbdac\") " pod="openstack/nova-cell0-cell-mapping-k82mt" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.351189 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ef4760-0e11-422c-b084-afe3d47fbdac-config-data\") pod \"nova-cell0-cell-mapping-k82mt\" (UID: \"25ef4760-0e11-422c-b084-afe3d47fbdac\") " pod="openstack/nova-cell0-cell-mapping-k82mt" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.351344 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dfdb\" (UniqueName: \"kubernetes.io/projected/25ef4760-0e11-422c-b084-afe3d47fbdac-kube-api-access-9dfdb\") pod \"nova-cell0-cell-mapping-k82mt\" (UID: \"25ef4760-0e11-422c-b084-afe3d47fbdac\") " pod="openstack/nova-cell0-cell-mapping-k82mt" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.351407 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/25ef4760-0e11-422c-b084-afe3d47fbdac-scripts\") pod \"nova-cell0-cell-mapping-k82mt\" (UID: \"25ef4760-0e11-422c-b084-afe3d47fbdac\") " pod="openstack/nova-cell0-cell-mapping-k82mt" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.357528 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.359390 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.361530 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.378784 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.403548 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.404675 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.407215 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.453701 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.454721 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dfdb\" (UniqueName: \"kubernetes.io/projected/25ef4760-0e11-422c-b084-afe3d47fbdac-kube-api-access-9dfdb\") pod \"nova-cell0-cell-mapping-k82mt\" (UID: \"25ef4760-0e11-422c-b084-afe3d47fbdac\") " pod="openstack/nova-cell0-cell-mapping-k82mt" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.454780 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25ef4760-0e11-422c-b084-afe3d47fbdac-scripts\") pod \"nova-cell0-cell-mapping-k82mt\" (UID: \"25ef4760-0e11-422c-b084-afe3d47fbdac\") " pod="openstack/nova-cell0-cell-mapping-k82mt" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.454882 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ef4760-0e11-422c-b084-afe3d47fbdac-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-k82mt\" (UID: \"25ef4760-0e11-422c-b084-afe3d47fbdac\") " pod="openstack/nova-cell0-cell-mapping-k82mt" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.454932 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ef4760-0e11-422c-b084-afe3d47fbdac-config-data\") pod \"nova-cell0-cell-mapping-k82mt\" (UID: \"25ef4760-0e11-422c-b084-afe3d47fbdac\") " pod="openstack/nova-cell0-cell-mapping-k82mt" Feb 27 19:18:49 crc kubenswrapper[4981]: 
I0227 19:18:49.465869 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25ef4760-0e11-422c-b084-afe3d47fbdac-scripts\") pod \"nova-cell0-cell-mapping-k82mt\" (UID: \"25ef4760-0e11-422c-b084-afe3d47fbdac\") " pod="openstack/nova-cell0-cell-mapping-k82mt" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.483103 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ef4760-0e11-422c-b084-afe3d47fbdac-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-k82mt\" (UID: \"25ef4760-0e11-422c-b084-afe3d47fbdac\") " pod="openstack/nova-cell0-cell-mapping-k82mt" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.483806 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ef4760-0e11-422c-b084-afe3d47fbdac-config-data\") pod \"nova-cell0-cell-mapping-k82mt\" (UID: \"25ef4760-0e11-422c-b084-afe3d47fbdac\") " pod="openstack/nova-cell0-cell-mapping-k82mt" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.497422 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.498306 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dfdb\" (UniqueName: \"kubernetes.io/projected/25ef4760-0e11-422c-b084-afe3d47fbdac-kube-api-access-9dfdb\") pod \"nova-cell0-cell-mapping-k82mt\" (UID: \"25ef4760-0e11-422c-b084-afe3d47fbdac\") " pod="openstack/nova-cell0-cell-mapping-k82mt" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.504842 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.512514 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.512785 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.522781 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-k82mt" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.531186 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.532443 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.537709 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.540563 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.574514 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbv7z\" (UniqueName: \"kubernetes.io/projected/292840f2-f0b1-4bcd-9787-225d6c1a3e51-kube-api-access-qbv7z\") pod \"nova-cell1-novncproxy-0\" (UID: \"292840f2-f0b1-4bcd-9787-225d6c1a3e51\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.574610 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/292840f2-f0b1-4bcd-9787-225d6c1a3e51-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"292840f2-f0b1-4bcd-9787-225d6c1a3e51\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.574642 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8\") " pod="openstack/nova-api-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.574700 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvkk5\" (UniqueName: \"kubernetes.io/projected/7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8-kube-api-access-dvkk5\") pod \"nova-api-0\" (UID: \"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8\") " pod="openstack/nova-api-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.574751 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8-logs\") pod \"nova-api-0\" (UID: \"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8\") " pod="openstack/nova-api-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.574771 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/292840f2-f0b1-4bcd-9787-225d6c1a3e51-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"292840f2-f0b1-4bcd-9787-225d6c1a3e51\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.574791 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8-config-data\") pod \"nova-api-0\" (UID: \"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8\") " pod="openstack/nova-api-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.658289 4981 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-tfm5p"] Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.660229 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.671919 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-tfm5p"] Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.689456 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832039c9-1940-4ebc-ba30-2e16f468e3f0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"832039c9-1940-4ebc-ba30-2e16f468e3f0\") " pod="openstack/nova-metadata-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.689592 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-config\") pod \"dnsmasq-dns-757b4f8459-tfm5p\" (UID: \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\") " pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.689662 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8-logs\") pod \"nova-api-0\" (UID: \"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8\") " pod="openstack/nova-api-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.689688 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-tfm5p\" (UID: \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\") " pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" Feb 27 19:18:49 crc 
kubenswrapper[4981]: I0227 19:18:49.689708 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/292840f2-f0b1-4bcd-9787-225d6c1a3e51-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"292840f2-f0b1-4bcd-9787-225d6c1a3e51\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.689729 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8-config-data\") pod \"nova-api-0\" (UID: \"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8\") " pod="openstack/nova-api-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.689770 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62m66\" (UniqueName: \"kubernetes.io/projected/770989f6-5783-4874-96fd-6fc1a6ea0757-kube-api-access-62m66\") pod \"nova-scheduler-0\" (UID: \"770989f6-5783-4874-96fd-6fc1a6ea0757\") " pod="openstack/nova-scheduler-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.689795 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbv7z\" (UniqueName: \"kubernetes.io/projected/292840f2-f0b1-4bcd-9787-225d6c1a3e51-kube-api-access-qbv7z\") pod \"nova-cell1-novncproxy-0\" (UID: \"292840f2-f0b1-4bcd-9787-225d6c1a3e51\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.689812 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/770989f6-5783-4874-96fd-6fc1a6ea0757-config-data\") pod \"nova-scheduler-0\" (UID: \"770989f6-5783-4874-96fd-6fc1a6ea0757\") " pod="openstack/nova-scheduler-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.689844 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/832039c9-1940-4ebc-ba30-2e16f468e3f0-logs\") pod \"nova-metadata-0\" (UID: \"832039c9-1940-4ebc-ba30-2e16f468e3f0\") " pod="openstack/nova-metadata-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.689873 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-dns-svc\") pod \"dnsmasq-dns-757b4f8459-tfm5p\" (UID: \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\") " pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.689903 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/770989f6-5783-4874-96fd-6fc1a6ea0757-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"770989f6-5783-4874-96fd-6fc1a6ea0757\") " pod="openstack/nova-scheduler-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.689931 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/832039c9-1940-4ebc-ba30-2e16f468e3f0-config-data\") pod \"nova-metadata-0\" (UID: \"832039c9-1940-4ebc-ba30-2e16f468e3f0\") " pod="openstack/nova-metadata-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.689984 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/292840f2-f0b1-4bcd-9787-225d6c1a3e51-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"292840f2-f0b1-4bcd-9787-225d6c1a3e51\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.690036 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8\") " pod="openstack/nova-api-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.690091 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-tfm5p\" (UID: \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\") " pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.690148 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-tfm5p\" (UID: \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\") " pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.690175 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkpp7\" (UniqueName: \"kubernetes.io/projected/832039c9-1940-4ebc-ba30-2e16f468e3f0-kube-api-access-hkpp7\") pod \"nova-metadata-0\" (UID: \"832039c9-1940-4ebc-ba30-2e16f468e3f0\") " pod="openstack/nova-metadata-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.690197 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgf7g\" (UniqueName: \"kubernetes.io/projected/5f0b93e5-3670-4873-8376-fdf1281ae2b4-kube-api-access-vgf7g\") pod \"dnsmasq-dns-757b4f8459-tfm5p\" (UID: \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\") " pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.690230 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvkk5\" 
(UniqueName: \"kubernetes.io/projected/7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8-kube-api-access-dvkk5\") pod \"nova-api-0\" (UID: \"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8\") " pod="openstack/nova-api-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.692091 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8-logs\") pod \"nova-api-0\" (UID: \"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8\") " pod="openstack/nova-api-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.700253 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8\") " pod="openstack/nova-api-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.700753 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8-config-data\") pod \"nova-api-0\" (UID: \"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8\") " pod="openstack/nova-api-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.713793 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/292840f2-f0b1-4bcd-9787-225d6c1a3e51-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"292840f2-f0b1-4bcd-9787-225d6c1a3e51\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.718120 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbv7z\" (UniqueName: \"kubernetes.io/projected/292840f2-f0b1-4bcd-9787-225d6c1a3e51-kube-api-access-qbv7z\") pod \"nova-cell1-novncproxy-0\" (UID: \"292840f2-f0b1-4bcd-9787-225d6c1a3e51\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:18:49 crc kubenswrapper[4981]: 
I0227 19:18:49.721232 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/292840f2-f0b1-4bcd-9787-225d6c1a3e51-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"292840f2-f0b1-4bcd-9787-225d6c1a3e51\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.729018 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.740255 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvkk5\" (UniqueName: \"kubernetes.io/projected/7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8-kube-api-access-dvkk5\") pod \"nova-api-0\" (UID: \"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8\") " pod="openstack/nova-api-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.796085 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832039c9-1940-4ebc-ba30-2e16f468e3f0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"832039c9-1940-4ebc-ba30-2e16f468e3f0\") " pod="openstack/nova-metadata-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.796133 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-config\") pod \"dnsmasq-dns-757b4f8459-tfm5p\" (UID: \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\") " pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.796159 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-tfm5p\" (UID: \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\") " 
pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.796190 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62m66\" (UniqueName: \"kubernetes.io/projected/770989f6-5783-4874-96fd-6fc1a6ea0757-kube-api-access-62m66\") pod \"nova-scheduler-0\" (UID: \"770989f6-5783-4874-96fd-6fc1a6ea0757\") " pod="openstack/nova-scheduler-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.796212 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/770989f6-5783-4874-96fd-6fc1a6ea0757-config-data\") pod \"nova-scheduler-0\" (UID: \"770989f6-5783-4874-96fd-6fc1a6ea0757\") " pod="openstack/nova-scheduler-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.796235 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/832039c9-1940-4ebc-ba30-2e16f468e3f0-logs\") pod \"nova-metadata-0\" (UID: \"832039c9-1940-4ebc-ba30-2e16f468e3f0\") " pod="openstack/nova-metadata-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.796264 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-dns-svc\") pod \"dnsmasq-dns-757b4f8459-tfm5p\" (UID: \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\") " pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.796281 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/770989f6-5783-4874-96fd-6fc1a6ea0757-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"770989f6-5783-4874-96fd-6fc1a6ea0757\") " pod="openstack/nova-scheduler-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.796298 4981 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/832039c9-1940-4ebc-ba30-2e16f468e3f0-config-data\") pod \"nova-metadata-0\" (UID: \"832039c9-1940-4ebc-ba30-2e16f468e3f0\") " pod="openstack/nova-metadata-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.796717 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-tfm5p\" (UID: \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\") " pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.796779 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-tfm5p\" (UID: \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\") " pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.796813 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkpp7\" (UniqueName: \"kubernetes.io/projected/832039c9-1940-4ebc-ba30-2e16f468e3f0-kube-api-access-hkpp7\") pod \"nova-metadata-0\" (UID: \"832039c9-1940-4ebc-ba30-2e16f468e3f0\") " pod="openstack/nova-metadata-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.796832 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgf7g\" (UniqueName: \"kubernetes.io/projected/5f0b93e5-3670-4873-8376-fdf1281ae2b4-kube-api-access-vgf7g\") pod \"dnsmasq-dns-757b4f8459-tfm5p\" (UID: \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\") " pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.800210 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-dns-svc\") pod \"dnsmasq-dns-757b4f8459-tfm5p\" (UID: \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\") " pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.800784 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-config\") pod \"dnsmasq-dns-757b4f8459-tfm5p\" (UID: \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\") " pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.801728 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/832039c9-1940-4ebc-ba30-2e16f468e3f0-logs\") pod \"nova-metadata-0\" (UID: \"832039c9-1940-4ebc-ba30-2e16f468e3f0\") " pod="openstack/nova-metadata-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.802211 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-tfm5p\" (UID: \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\") " pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.803870 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-tfm5p\" (UID: \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\") " pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.805804 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-tfm5p\" (UID: 
\"5f0b93e5-3670-4873-8376-fdf1281ae2b4\") " pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.811491 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/770989f6-5783-4874-96fd-6fc1a6ea0757-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"770989f6-5783-4874-96fd-6fc1a6ea0757\") " pod="openstack/nova-scheduler-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.812000 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832039c9-1940-4ebc-ba30-2e16f468e3f0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"832039c9-1940-4ebc-ba30-2e16f468e3f0\") " pod="openstack/nova-metadata-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.812219 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/770989f6-5783-4874-96fd-6fc1a6ea0757-config-data\") pod \"nova-scheduler-0\" (UID: \"770989f6-5783-4874-96fd-6fc1a6ea0757\") " pod="openstack/nova-scheduler-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.812905 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/832039c9-1940-4ebc-ba30-2e16f468e3f0-config-data\") pod \"nova-metadata-0\" (UID: \"832039c9-1940-4ebc-ba30-2e16f468e3f0\") " pod="openstack/nova-metadata-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.813633 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgf7g\" (UniqueName: \"kubernetes.io/projected/5f0b93e5-3670-4873-8376-fdf1281ae2b4-kube-api-access-vgf7g\") pod \"dnsmasq-dns-757b4f8459-tfm5p\" (UID: \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\") " pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.820181 4981 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-62m66\" (UniqueName: \"kubernetes.io/projected/770989f6-5783-4874-96fd-6fc1a6ea0757-kube-api-access-62m66\") pod \"nova-scheduler-0\" (UID: \"770989f6-5783-4874-96fd-6fc1a6ea0757\") " pod="openstack/nova-scheduler-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.821102 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkpp7\" (UniqueName: \"kubernetes.io/projected/832039c9-1940-4ebc-ba30-2e16f468e3f0-kube-api-access-hkpp7\") pod \"nova-metadata-0\" (UID: \"832039c9-1940-4ebc-ba30-2e16f468e3f0\") " pod="openstack/nova-metadata-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.825792 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.976687 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.977349 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 19:18:49 crc kubenswrapper[4981]: I0227 19:18:49.989435 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 19:18:50 crc kubenswrapper[4981]: I0227 19:18:50.219266 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9sxrp"] Feb 27 19:18:50 crc kubenswrapper[4981]: I0227 19:18:50.220776 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9sxrp" Feb 27 19:18:50 crc kubenswrapper[4981]: I0227 19:18:50.241943 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 27 19:18:50 crc kubenswrapper[4981]: I0227 19:18:50.242466 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 27 19:18:50 crc kubenswrapper[4981]: I0227 19:18:50.262398 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9sxrp"] Feb 27 19:18:50 crc kubenswrapper[4981]: I0227 19:18:50.288656 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-k82mt"] Feb 27 19:18:50 crc kubenswrapper[4981]: I0227 19:18:50.359014 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 19:18:50 crc kubenswrapper[4981]: I0227 19:18:50.412680 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1cb5d15-1a22-4c56-a028-11eb02f9e043-scripts\") pod \"nova-cell1-conductor-db-sync-9sxrp\" (UID: \"f1cb5d15-1a22-4c56-a028-11eb02f9e043\") " pod="openstack/nova-cell1-conductor-db-sync-9sxrp" Feb 27 19:18:50 crc kubenswrapper[4981]: I0227 19:18:50.412777 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1cb5d15-1a22-4c56-a028-11eb02f9e043-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9sxrp\" (UID: \"f1cb5d15-1a22-4c56-a028-11eb02f9e043\") " pod="openstack/nova-cell1-conductor-db-sync-9sxrp" Feb 27 19:18:50 crc kubenswrapper[4981]: I0227 19:18:50.412816 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92rl8\" (UniqueName: 
\"kubernetes.io/projected/f1cb5d15-1a22-4c56-a028-11eb02f9e043-kube-api-access-92rl8\") pod \"nova-cell1-conductor-db-sync-9sxrp\" (UID: \"f1cb5d15-1a22-4c56-a028-11eb02f9e043\") " pod="openstack/nova-cell1-conductor-db-sync-9sxrp" Feb 27 19:18:50 crc kubenswrapper[4981]: I0227 19:18:50.412859 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1cb5d15-1a22-4c56-a028-11eb02f9e043-config-data\") pod \"nova-cell1-conductor-db-sync-9sxrp\" (UID: \"f1cb5d15-1a22-4c56-a028-11eb02f9e043\") " pod="openstack/nova-cell1-conductor-db-sync-9sxrp" Feb 27 19:18:50 crc kubenswrapper[4981]: I0227 19:18:50.472347 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-tfm5p"] Feb 27 19:18:50 crc kubenswrapper[4981]: I0227 19:18:50.514500 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1cb5d15-1a22-4c56-a028-11eb02f9e043-scripts\") pod \"nova-cell1-conductor-db-sync-9sxrp\" (UID: \"f1cb5d15-1a22-4c56-a028-11eb02f9e043\") " pod="openstack/nova-cell1-conductor-db-sync-9sxrp" Feb 27 19:18:50 crc kubenswrapper[4981]: I0227 19:18:50.515096 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1cb5d15-1a22-4c56-a028-11eb02f9e043-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9sxrp\" (UID: \"f1cb5d15-1a22-4c56-a028-11eb02f9e043\") " pod="openstack/nova-cell1-conductor-db-sync-9sxrp" Feb 27 19:18:50 crc kubenswrapper[4981]: I0227 19:18:50.515145 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92rl8\" (UniqueName: \"kubernetes.io/projected/f1cb5d15-1a22-4c56-a028-11eb02f9e043-kube-api-access-92rl8\") pod \"nova-cell1-conductor-db-sync-9sxrp\" (UID: \"f1cb5d15-1a22-4c56-a028-11eb02f9e043\") " 
pod="openstack/nova-cell1-conductor-db-sync-9sxrp" Feb 27 19:18:50 crc kubenswrapper[4981]: I0227 19:18:50.515202 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1cb5d15-1a22-4c56-a028-11eb02f9e043-config-data\") pod \"nova-cell1-conductor-db-sync-9sxrp\" (UID: \"f1cb5d15-1a22-4c56-a028-11eb02f9e043\") " pod="openstack/nova-cell1-conductor-db-sync-9sxrp" Feb 27 19:18:50 crc kubenswrapper[4981]: I0227 19:18:50.520770 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1cb5d15-1a22-4c56-a028-11eb02f9e043-scripts\") pod \"nova-cell1-conductor-db-sync-9sxrp\" (UID: \"f1cb5d15-1a22-4c56-a028-11eb02f9e043\") " pod="openstack/nova-cell1-conductor-db-sync-9sxrp" Feb 27 19:18:50 crc kubenswrapper[4981]: I0227 19:18:50.520850 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1cb5d15-1a22-4c56-a028-11eb02f9e043-config-data\") pod \"nova-cell1-conductor-db-sync-9sxrp\" (UID: \"f1cb5d15-1a22-4c56-a028-11eb02f9e043\") " pod="openstack/nova-cell1-conductor-db-sync-9sxrp" Feb 27 19:18:50 crc kubenswrapper[4981]: I0227 19:18:50.523528 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1cb5d15-1a22-4c56-a028-11eb02f9e043-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9sxrp\" (UID: \"f1cb5d15-1a22-4c56-a028-11eb02f9e043\") " pod="openstack/nova-cell1-conductor-db-sync-9sxrp" Feb 27 19:18:50 crc kubenswrapper[4981]: I0227 19:18:50.536591 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92rl8\" (UniqueName: \"kubernetes.io/projected/f1cb5d15-1a22-4c56-a028-11eb02f9e043-kube-api-access-92rl8\") pod \"nova-cell1-conductor-db-sync-9sxrp\" (UID: \"f1cb5d15-1a22-4c56-a028-11eb02f9e043\") " 
pod="openstack/nova-cell1-conductor-db-sync-9sxrp" Feb 27 19:18:50 crc kubenswrapper[4981]: I0227 19:18:50.542645 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9sxrp" Feb 27 19:18:50 crc kubenswrapper[4981]: I0227 19:18:50.601720 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 19:18:50 crc kubenswrapper[4981]: W0227 19:18:50.624062 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod832039c9_1940_4ebc_ba30_2e16f468e3f0.slice/crio-9a1dd136b80d15fe871204f78373400887f64abe87bcd7cacdc09024774c0f84 WatchSource:0}: Error finding container 9a1dd136b80d15fe871204f78373400887f64abe87bcd7cacdc09024774c0f84: Status 404 returned error can't find the container with id 9a1dd136b80d15fe871204f78373400887f64abe87bcd7cacdc09024774c0f84 Feb 27 19:18:50 crc kubenswrapper[4981]: I0227 19:18:50.627857 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 19:18:50 crc kubenswrapper[4981]: W0227 19:18:50.637593 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bfee6e4_4ff1_42fb_b40d_9d4f6d046dd8.slice/crio-5b547332633a6c9e68d290ac09b2289fe58052dea800bef97c02e046048a0430 WatchSource:0}: Error finding container 5b547332633a6c9e68d290ac09b2289fe58052dea800bef97c02e046048a0430: Status 404 returned error can't find the container with id 5b547332633a6c9e68d290ac09b2289fe58052dea800bef97c02e046048a0430 Feb 27 19:18:50 crc kubenswrapper[4981]: I0227 19:18:50.716499 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 19:18:51 crc kubenswrapper[4981]: I0227 19:18:51.034032 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9sxrp"] Feb 27 19:18:51 crc kubenswrapper[4981]: W0227 
19:18:51.056530 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1cb5d15_1a22_4c56_a028_11eb02f9e043.slice/crio-6794709cda8cd4fdc67a3918dd7b221d1ce77268656702a18fe63d0515d1562c WatchSource:0}: Error finding container 6794709cda8cd4fdc67a3918dd7b221d1ce77268656702a18fe63d0515d1562c: Status 404 returned error can't find the container with id 6794709cda8cd4fdc67a3918dd7b221d1ce77268656702a18fe63d0515d1562c Feb 27 19:18:51 crc kubenswrapper[4981]: I0227 19:18:51.275719 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"292840f2-f0b1-4bcd-9787-225d6c1a3e51","Type":"ContainerStarted","Data":"2bef6d1e4227fa1dc177c4db85247d2daf0e2083e8102af7ac03a9e51fa051e8"} Feb 27 19:18:51 crc kubenswrapper[4981]: I0227 19:18:51.280713 4981 generic.go:334] "Generic (PLEG): container finished" podID="5f0b93e5-3670-4873-8376-fdf1281ae2b4" containerID="ce5923375012ec6f5ac3abb4bd3da961e135a04c98a2f4682e7650a78d0ab345" exitCode=0 Feb 27 19:18:51 crc kubenswrapper[4981]: I0227 19:18:51.280793 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" event={"ID":"5f0b93e5-3670-4873-8376-fdf1281ae2b4","Type":"ContainerDied","Data":"ce5923375012ec6f5ac3abb4bd3da961e135a04c98a2f4682e7650a78d0ab345"} Feb 27 19:18:51 crc kubenswrapper[4981]: I0227 19:18:51.280827 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" event={"ID":"5f0b93e5-3670-4873-8376-fdf1281ae2b4","Type":"ContainerStarted","Data":"e986f89b43f7d489036a5e537a7b3773846435a4ce56f3b4bb7633d11f259ddc"} Feb 27 19:18:51 crc kubenswrapper[4981]: I0227 19:18:51.292306 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"832039c9-1940-4ebc-ba30-2e16f468e3f0","Type":"ContainerStarted","Data":"9a1dd136b80d15fe871204f78373400887f64abe87bcd7cacdc09024774c0f84"} Feb 27 
19:18:51 crc kubenswrapper[4981]: I0227 19:18:51.303550 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-k82mt" event={"ID":"25ef4760-0e11-422c-b084-afe3d47fbdac","Type":"ContainerStarted","Data":"37d59483cf58af393b8691733cb08b1211746466334fa7c73d6a0931c5a94550"} Feb 27 19:18:51 crc kubenswrapper[4981]: I0227 19:18:51.303595 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-k82mt" event={"ID":"25ef4760-0e11-422c-b084-afe3d47fbdac","Type":"ContainerStarted","Data":"b0c8e285ae1569b300ebacf2189587bb42a6a10e52b2e618e43c44da8ea7c887"} Feb 27 19:18:51 crc kubenswrapper[4981]: I0227 19:18:51.311634 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9sxrp" event={"ID":"f1cb5d15-1a22-4c56-a028-11eb02f9e043","Type":"ContainerStarted","Data":"007fd137645033b11c92c0c281ae4b91b4c7023beffad98c9d943dcce0e7b915"} Feb 27 19:18:51 crc kubenswrapper[4981]: I0227 19:18:51.311701 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9sxrp" event={"ID":"f1cb5d15-1a22-4c56-a028-11eb02f9e043","Type":"ContainerStarted","Data":"6794709cda8cd4fdc67a3918dd7b221d1ce77268656702a18fe63d0515d1562c"} Feb 27 19:18:51 crc kubenswrapper[4981]: I0227 19:18:51.325857 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"770989f6-5783-4874-96fd-6fc1a6ea0757","Type":"ContainerStarted","Data":"ce24d85e01034d6e39f74928a6f1f76c3e28cf36d92d031c5acbc00207b868ca"} Feb 27 19:18:51 crc kubenswrapper[4981]: I0227 19:18:51.333455 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8","Type":"ContainerStarted","Data":"5b547332633a6c9e68d290ac09b2289fe58052dea800bef97c02e046048a0430"} Feb 27 19:18:51 crc kubenswrapper[4981]: I0227 19:18:51.334504 4981 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/nova-cell0-cell-mapping-k82mt" podStartSLOduration=2.334483888 podStartE2EDuration="2.334483888s" podCreationTimestamp="2026-02-27 19:18:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:18:51.32084371 +0000 UTC m=+2030.799624870" watchObservedRunningTime="2026-02-27 19:18:51.334483888 +0000 UTC m=+2030.813265048" Feb 27 19:18:51 crc kubenswrapper[4981]: I0227 19:18:51.342189 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-9sxrp" podStartSLOduration=1.342170044 podStartE2EDuration="1.342170044s" podCreationTimestamp="2026-02-27 19:18:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:18:51.339827342 +0000 UTC m=+2030.818608512" watchObservedRunningTime="2026-02-27 19:18:51.342170044 +0000 UTC m=+2030.820951194" Feb 27 19:18:52 crc kubenswrapper[4981]: I0227 19:18:52.345274 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" event={"ID":"5f0b93e5-3670-4873-8376-fdf1281ae2b4","Type":"ContainerStarted","Data":"5875c960bc5861171fc7155ae6202ee5108b771f739acb2c3f048136bc2e6b8b"} Feb 27 19:18:52 crc kubenswrapper[4981]: I0227 19:18:52.363311 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" podStartSLOduration=3.363287916 podStartE2EDuration="3.363287916s" podCreationTimestamp="2026-02-27 19:18:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:18:52.360758838 +0000 UTC m=+2031.839540008" watchObservedRunningTime="2026-02-27 19:18:52.363287916 +0000 UTC m=+2031.842069076" Feb 27 19:18:52 crc kubenswrapper[4981]: I0227 19:18:52.647624 4981 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 19:18:52 crc kubenswrapper[4981]: I0227 19:18:52.663434 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 19:18:53 crc kubenswrapper[4981]: I0227 19:18:53.354451 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" Feb 27 19:18:54 crc kubenswrapper[4981]: I0227 19:18:54.367842 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"770989f6-5783-4874-96fd-6fc1a6ea0757","Type":"ContainerStarted","Data":"041e2383b18a3d3d9bc0cb4291a7cc0041aaec7f99f62945dc6f15a8a8d352b1"} Feb 27 19:18:54 crc kubenswrapper[4981]: I0227 19:18:54.369414 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8","Type":"ContainerStarted","Data":"d97f9378864dc341a7cf6aac96c7d09b250d90254a151f0c12b37f2f1de3fe5c"} Feb 27 19:18:54 crc kubenswrapper[4981]: I0227 19:18:54.370971 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="292840f2-f0b1-4bcd-9787-225d6c1a3e51" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://d4c8b5d08d228e7a4379977b7273e89ea61c13394c3789bba9da06a9a55ff18e" gracePeriod=30 Feb 27 19:18:54 crc kubenswrapper[4981]: I0227 19:18:54.371004 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"292840f2-f0b1-4bcd-9787-225d6c1a3e51","Type":"ContainerStarted","Data":"d4c8b5d08d228e7a4379977b7273e89ea61c13394c3789bba9da06a9a55ff18e"} Feb 27 19:18:54 crc kubenswrapper[4981]: I0227 19:18:54.374832 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"832039c9-1940-4ebc-ba30-2e16f468e3f0","Type":"ContainerStarted","Data":"0afd95187056cc95130b5281fa2139c9a3f7db3666a4c389d87d75c269f893ca"} Feb 27 19:18:54 crc 
kubenswrapper[4981]: I0227 19:18:54.403205 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.234904064 podStartE2EDuration="5.403184287s" podCreationTimestamp="2026-02-27 19:18:49 +0000 UTC" firstStartedPulling="2026-02-27 19:18:50.731216554 +0000 UTC m=+2030.209997714" lastFinishedPulling="2026-02-27 19:18:53.899496787 +0000 UTC m=+2033.378277937" observedRunningTime="2026-02-27 19:18:54.393973855 +0000 UTC m=+2033.872755015" watchObservedRunningTime="2026-02-27 19:18:54.403184287 +0000 UTC m=+2033.881965447" Feb 27 19:18:54 crc kubenswrapper[4981]: I0227 19:18:54.418230 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.999117242 podStartE2EDuration="5.418212019s" podCreationTimestamp="2026-02-27 19:18:49 +0000 UTC" firstStartedPulling="2026-02-27 19:18:50.478457511 +0000 UTC m=+2029.957238671" lastFinishedPulling="2026-02-27 19:18:53.897552288 +0000 UTC m=+2033.376333448" observedRunningTime="2026-02-27 19:18:54.409704398 +0000 UTC m=+2033.888485558" watchObservedRunningTime="2026-02-27 19:18:54.418212019 +0000 UTC m=+2033.896993179" Feb 27 19:18:54 crc kubenswrapper[4981]: I0227 19:18:54.730133 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:18:54 crc kubenswrapper[4981]: I0227 19:18:54.991323 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 27 19:18:55 crc kubenswrapper[4981]: I0227 19:18:55.385677 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8","Type":"ContainerStarted","Data":"015f9b1ec9b883b36119c3474e2c485e3b8c21b974226a67510d92408f36a1ce"} Feb 27 19:18:55 crc kubenswrapper[4981]: I0227 19:18:55.389424 4981 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-metadata-0" podUID="832039c9-1940-4ebc-ba30-2e16f468e3f0" containerName="nova-metadata-log" containerID="cri-o://0afd95187056cc95130b5281fa2139c9a3f7db3666a4c389d87d75c269f893ca" gracePeriod=30 Feb 27 19:18:55 crc kubenswrapper[4981]: I0227 19:18:55.389669 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"832039c9-1940-4ebc-ba30-2e16f468e3f0","Type":"ContainerStarted","Data":"8faf674dfcb6aa7391089710dd0ddb54cd2a3cdc58ff4cc210be1dd525b0b5b6"} Feb 27 19:18:55 crc kubenswrapper[4981]: I0227 19:18:55.389733 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="832039c9-1940-4ebc-ba30-2e16f468e3f0" containerName="nova-metadata-metadata" containerID="cri-o://8faf674dfcb6aa7391089710dd0ddb54cd2a3cdc58ff4cc210be1dd525b0b5b6" gracePeriod=30 Feb 27 19:18:55 crc kubenswrapper[4981]: I0227 19:18:55.413632 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.151596903 podStartE2EDuration="6.413612611s" podCreationTimestamp="2026-02-27 19:18:49 +0000 UTC" firstStartedPulling="2026-02-27 19:18:50.640375548 +0000 UTC m=+2030.119156708" lastFinishedPulling="2026-02-27 19:18:53.902391256 +0000 UTC m=+2033.381172416" observedRunningTime="2026-02-27 19:18:55.406595406 +0000 UTC m=+2034.885376566" watchObservedRunningTime="2026-02-27 19:18:55.413612611 +0000 UTC m=+2034.892393771" Feb 27 19:18:55 crc kubenswrapper[4981]: I0227 19:18:55.429904 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.155667208 podStartE2EDuration="6.42988332s" podCreationTimestamp="2026-02-27 19:18:49 +0000 UTC" firstStartedPulling="2026-02-27 19:18:50.626470981 +0000 UTC m=+2030.105252151" lastFinishedPulling="2026-02-27 19:18:53.900687103 +0000 UTC m=+2033.379468263" observedRunningTime="2026-02-27 19:18:55.422935117 +0000 UTC 
m=+2034.901716277" watchObservedRunningTime="2026-02-27 19:18:55.42988332 +0000 UTC m=+2034.908664480" Feb 27 19:18:55 crc kubenswrapper[4981]: I0227 19:18:55.658593 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.210589 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.264560 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832039c9-1940-4ebc-ba30-2e16f468e3f0-combined-ca-bundle\") pod \"832039c9-1940-4ebc-ba30-2e16f468e3f0\" (UID: \"832039c9-1940-4ebc-ba30-2e16f468e3f0\") " Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.264709 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/832039c9-1940-4ebc-ba30-2e16f468e3f0-config-data\") pod \"832039c9-1940-4ebc-ba30-2e16f468e3f0\" (UID: \"832039c9-1940-4ebc-ba30-2e16f468e3f0\") " Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.264748 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/832039c9-1940-4ebc-ba30-2e16f468e3f0-logs\") pod \"832039c9-1940-4ebc-ba30-2e16f468e3f0\" (UID: \"832039c9-1940-4ebc-ba30-2e16f468e3f0\") " Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.264834 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkpp7\" (UniqueName: \"kubernetes.io/projected/832039c9-1940-4ebc-ba30-2e16f468e3f0-kube-api-access-hkpp7\") pod \"832039c9-1940-4ebc-ba30-2e16f468e3f0\" (UID: \"832039c9-1940-4ebc-ba30-2e16f468e3f0\") " Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.265137 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/832039c9-1940-4ebc-ba30-2e16f468e3f0-logs" (OuterVolumeSpecName: "logs") pod "832039c9-1940-4ebc-ba30-2e16f468e3f0" (UID: "832039c9-1940-4ebc-ba30-2e16f468e3f0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.265417 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/832039c9-1940-4ebc-ba30-2e16f468e3f0-logs\") on node \"crc\" DevicePath \"\"" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.270730 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/832039c9-1940-4ebc-ba30-2e16f468e3f0-kube-api-access-hkpp7" (OuterVolumeSpecName: "kube-api-access-hkpp7") pod "832039c9-1940-4ebc-ba30-2e16f468e3f0" (UID: "832039c9-1940-4ebc-ba30-2e16f468e3f0"). InnerVolumeSpecName "kube-api-access-hkpp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.294359 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832039c9-1940-4ebc-ba30-2e16f468e3f0-config-data" (OuterVolumeSpecName: "config-data") pod "832039c9-1940-4ebc-ba30-2e16f468e3f0" (UID: "832039c9-1940-4ebc-ba30-2e16f468e3f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.299150 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832039c9-1940-4ebc-ba30-2e16f468e3f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "832039c9-1940-4ebc-ba30-2e16f468e3f0" (UID: "832039c9-1940-4ebc-ba30-2e16f468e3f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.367513 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832039c9-1940-4ebc-ba30-2e16f468e3f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.367546 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/832039c9-1940-4ebc-ba30-2e16f468e3f0-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.367561 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkpp7\" (UniqueName: \"kubernetes.io/projected/832039c9-1940-4ebc-ba30-2e16f468e3f0-kube-api-access-hkpp7\") on node \"crc\" DevicePath \"\"" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.410470 4981 generic.go:334] "Generic (PLEG): container finished" podID="832039c9-1940-4ebc-ba30-2e16f468e3f0" containerID="8faf674dfcb6aa7391089710dd0ddb54cd2a3cdc58ff4cc210be1dd525b0b5b6" exitCode=0 Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.410511 4981 generic.go:334] "Generic (PLEG): container finished" podID="832039c9-1940-4ebc-ba30-2e16f468e3f0" containerID="0afd95187056cc95130b5281fa2139c9a3f7db3666a4c389d87d75c269f893ca" exitCode=143 Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.411190 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="073fb193-6587-4c6c-b20d-82a5b3075a20" containerName="kube-state-metrics" containerID="cri-o://290ab13d1e06be99faf848d99a57b28cdaead931e71465163c086723480eca80" gracePeriod=30 Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.412605 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"832039c9-1940-4ebc-ba30-2e16f468e3f0","Type":"ContainerDied","Data":"8faf674dfcb6aa7391089710dd0ddb54cd2a3cdc58ff4cc210be1dd525b0b5b6"} Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.412715 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"832039c9-1940-4ebc-ba30-2e16f468e3f0","Type":"ContainerDied","Data":"0afd95187056cc95130b5281fa2139c9a3f7db3666a4c389d87d75c269f893ca"} Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.412765 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"832039c9-1940-4ebc-ba30-2e16f468e3f0","Type":"ContainerDied","Data":"9a1dd136b80d15fe871204f78373400887f64abe87bcd7cacdc09024774c0f84"} Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.412815 4981 scope.go:117] "RemoveContainer" containerID="8faf674dfcb6aa7391089710dd0ddb54cd2a3cdc58ff4cc210be1dd525b0b5b6" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.413203 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.456163 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.457120 4981 scope.go:117] "RemoveContainer" containerID="0afd95187056cc95130b5281fa2139c9a3f7db3666a4c389d87d75c269f893ca" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.470296 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.483802 4981 scope.go:117] "RemoveContainer" containerID="8faf674dfcb6aa7391089710dd0ddb54cd2a3cdc58ff4cc210be1dd525b0b5b6" Feb 27 19:18:56 crc kubenswrapper[4981]: E0227 19:18:56.486333 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8faf674dfcb6aa7391089710dd0ddb54cd2a3cdc58ff4cc210be1dd525b0b5b6\": container with ID starting with 8faf674dfcb6aa7391089710dd0ddb54cd2a3cdc58ff4cc210be1dd525b0b5b6 not found: ID does not exist" containerID="8faf674dfcb6aa7391089710dd0ddb54cd2a3cdc58ff4cc210be1dd525b0b5b6" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.486559 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8faf674dfcb6aa7391089710dd0ddb54cd2a3cdc58ff4cc210be1dd525b0b5b6"} err="failed to get container status \"8faf674dfcb6aa7391089710dd0ddb54cd2a3cdc58ff4cc210be1dd525b0b5b6\": rpc error: code = NotFound desc = could not find container \"8faf674dfcb6aa7391089710dd0ddb54cd2a3cdc58ff4cc210be1dd525b0b5b6\": container with ID starting with 8faf674dfcb6aa7391089710dd0ddb54cd2a3cdc58ff4cc210be1dd525b0b5b6 not found: ID does not exist" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.486752 4981 scope.go:117] "RemoveContainer" containerID="0afd95187056cc95130b5281fa2139c9a3f7db3666a4c389d87d75c269f893ca" Feb 27 19:18:56 crc kubenswrapper[4981]: 
E0227 19:18:56.487170 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0afd95187056cc95130b5281fa2139c9a3f7db3666a4c389d87d75c269f893ca\": container with ID starting with 0afd95187056cc95130b5281fa2139c9a3f7db3666a4c389d87d75c269f893ca not found: ID does not exist" containerID="0afd95187056cc95130b5281fa2139c9a3f7db3666a4c389d87d75c269f893ca" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.487286 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0afd95187056cc95130b5281fa2139c9a3f7db3666a4c389d87d75c269f893ca"} err="failed to get container status \"0afd95187056cc95130b5281fa2139c9a3f7db3666a4c389d87d75c269f893ca\": rpc error: code = NotFound desc = could not find container \"0afd95187056cc95130b5281fa2139c9a3f7db3666a4c389d87d75c269f893ca\": container with ID starting with 0afd95187056cc95130b5281fa2139c9a3f7db3666a4c389d87d75c269f893ca not found: ID does not exist" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.487400 4981 scope.go:117] "RemoveContainer" containerID="8faf674dfcb6aa7391089710dd0ddb54cd2a3cdc58ff4cc210be1dd525b0b5b6" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.487710 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8faf674dfcb6aa7391089710dd0ddb54cd2a3cdc58ff4cc210be1dd525b0b5b6"} err="failed to get container status \"8faf674dfcb6aa7391089710dd0ddb54cd2a3cdc58ff4cc210be1dd525b0b5b6\": rpc error: code = NotFound desc = could not find container \"8faf674dfcb6aa7391089710dd0ddb54cd2a3cdc58ff4cc210be1dd525b0b5b6\": container with ID starting with 8faf674dfcb6aa7391089710dd0ddb54cd2a3cdc58ff4cc210be1dd525b0b5b6 not found: ID does not exist" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.487819 4981 scope.go:117] "RemoveContainer" containerID="0afd95187056cc95130b5281fa2139c9a3f7db3666a4c389d87d75c269f893ca" Feb 27 19:18:56 crc 
kubenswrapper[4981]: I0227 19:18:56.488178 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0afd95187056cc95130b5281fa2139c9a3f7db3666a4c389d87d75c269f893ca"} err="failed to get container status \"0afd95187056cc95130b5281fa2139c9a3f7db3666a4c389d87d75c269f893ca\": rpc error: code = NotFound desc = could not find container \"0afd95187056cc95130b5281fa2139c9a3f7db3666a4c389d87d75c269f893ca\": container with ID starting with 0afd95187056cc95130b5281fa2139c9a3f7db3666a4c389d87d75c269f893ca not found: ID does not exist" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.515908 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 27 19:18:56 crc kubenswrapper[4981]: E0227 19:18:56.518965 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="832039c9-1940-4ebc-ba30-2e16f468e3f0" containerName="nova-metadata-metadata" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.518984 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="832039c9-1940-4ebc-ba30-2e16f468e3f0" containerName="nova-metadata-metadata" Feb 27 19:18:56 crc kubenswrapper[4981]: E0227 19:18:56.519022 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="832039c9-1940-4ebc-ba30-2e16f468e3f0" containerName="nova-metadata-log" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.519031 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="832039c9-1940-4ebc-ba30-2e16f468e3f0" containerName="nova-metadata-log" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.519393 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="832039c9-1940-4ebc-ba30-2e16f468e3f0" containerName="nova-metadata-log" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.519449 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="832039c9-1940-4ebc-ba30-2e16f468e3f0" containerName="nova-metadata-metadata" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 
19:18:56.520907 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.528835 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.528853 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.549053 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.579669 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"948b2b0a-6ee4-422a-8f8f-ba4271a94c61\") " pod="openstack/nova-metadata-0" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.579716 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-logs\") pod \"nova-metadata-0\" (UID: \"948b2b0a-6ee4-422a-8f8f-ba4271a94c61\") " pod="openstack/nova-metadata-0" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.579759 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6ph4\" (UniqueName: \"kubernetes.io/projected/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-kube-api-access-f6ph4\") pod \"nova-metadata-0\" (UID: \"948b2b0a-6ee4-422a-8f8f-ba4271a94c61\") " pod="openstack/nova-metadata-0" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.579847 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-config-data\") pod \"nova-metadata-0\" (UID: \"948b2b0a-6ee4-422a-8f8f-ba4271a94c61\") " pod="openstack/nova-metadata-0" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.579876 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"948b2b0a-6ee4-422a-8f8f-ba4271a94c61\") " pod="openstack/nova-metadata-0" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.681518 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"948b2b0a-6ee4-422a-8f8f-ba4271a94c61\") " pod="openstack/nova-metadata-0" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.681565 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-logs\") pod \"nova-metadata-0\" (UID: \"948b2b0a-6ee4-422a-8f8f-ba4271a94c61\") " pod="openstack/nova-metadata-0" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.681606 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6ph4\" (UniqueName: \"kubernetes.io/projected/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-kube-api-access-f6ph4\") pod \"nova-metadata-0\" (UID: \"948b2b0a-6ee4-422a-8f8f-ba4271a94c61\") " pod="openstack/nova-metadata-0" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.681698 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-config-data\") pod \"nova-metadata-0\" (UID: \"948b2b0a-6ee4-422a-8f8f-ba4271a94c61\") " 
pod="openstack/nova-metadata-0" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.681725 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"948b2b0a-6ee4-422a-8f8f-ba4271a94c61\") " pod="openstack/nova-metadata-0" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.683129 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-logs\") pod \"nova-metadata-0\" (UID: \"948b2b0a-6ee4-422a-8f8f-ba4271a94c61\") " pod="openstack/nova-metadata-0" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.685691 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-config-data\") pod \"nova-metadata-0\" (UID: \"948b2b0a-6ee4-422a-8f8f-ba4271a94c61\") " pod="openstack/nova-metadata-0" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.685743 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"948b2b0a-6ee4-422a-8f8f-ba4271a94c61\") " pod="openstack/nova-metadata-0" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.688499 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"948b2b0a-6ee4-422a-8f8f-ba4271a94c61\") " pod="openstack/nova-metadata-0" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.698157 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6ph4\" (UniqueName: 
\"kubernetes.io/projected/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-kube-api-access-f6ph4\") pod \"nova-metadata-0\" (UID: \"948b2b0a-6ee4-422a-8f8f-ba4271a94c61\") " pod="openstack/nova-metadata-0" Feb 27 19:18:56 crc kubenswrapper[4981]: I0227 19:18:56.959142 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 19:18:57 crc kubenswrapper[4981]: I0227 19:18:57.422403 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 19:18:57 crc kubenswrapper[4981]: I0227 19:18:57.440239 4981 generic.go:334] "Generic (PLEG): container finished" podID="073fb193-6587-4c6c-b20d-82a5b3075a20" containerID="290ab13d1e06be99faf848d99a57b28cdaead931e71465163c086723480eca80" exitCode=2 Feb 27 19:18:57 crc kubenswrapper[4981]: I0227 19:18:57.440379 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"073fb193-6587-4c6c-b20d-82a5b3075a20","Type":"ContainerDied","Data":"290ab13d1e06be99faf848d99a57b28cdaead931e71465163c086723480eca80"} Feb 27 19:18:57 crc kubenswrapper[4981]: I0227 19:18:57.544640 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:18:57 crc kubenswrapper[4981]: I0227 19:18:57.545403 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4f1f0012-8326-4378-9870-0da9e2128a42" containerName="sg-core" containerID="cri-o://b1b4d22213f44911ebe10a3402ffc2fe9e45bf83ac72fb843b704b20dfc45c5d" gracePeriod=30 Feb 27 19:18:57 crc kubenswrapper[4981]: I0227 19:18:57.545435 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4f1f0012-8326-4378-9870-0da9e2128a42" containerName="ceilometer-notification-agent" containerID="cri-o://39a33d7f90b81049a430bd8fb87e2b3fa095391998769fa7d5f18caca0535e5d" gracePeriod=30 Feb 27 19:18:57 crc kubenswrapper[4981]: I0227 19:18:57.545539 4981 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4f1f0012-8326-4378-9870-0da9e2128a42" containerName="proxy-httpd" containerID="cri-o://683781b1d22b4e5d94316ec9c2e6b71289420b5a72944d06a1be1ebb8a8c1acc" gracePeriod=30 Feb 27 19:18:57 crc kubenswrapper[4981]: I0227 19:18:57.545282 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4f1f0012-8326-4378-9870-0da9e2128a42" containerName="ceilometer-central-agent" containerID="cri-o://cf37390c0d281caea3ea0a3e13a5ad34f321561bf275a4b694e4c3e1b563144d" gracePeriod=30 Feb 27 19:18:57 crc kubenswrapper[4981]: I0227 19:18:57.598196 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 19:18:57 crc kubenswrapper[4981]: I0227 19:18:57.645590 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="832039c9-1940-4ebc-ba30-2e16f468e3f0" path="/var/lib/kubelet/pods/832039c9-1940-4ebc-ba30-2e16f468e3f0/volumes" Feb 27 19:18:57 crc kubenswrapper[4981]: I0227 19:18:57.702197 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gcsf\" (UniqueName: \"kubernetes.io/projected/073fb193-6587-4c6c-b20d-82a5b3075a20-kube-api-access-9gcsf\") pod \"073fb193-6587-4c6c-b20d-82a5b3075a20\" (UID: \"073fb193-6587-4c6c-b20d-82a5b3075a20\") " Feb 27 19:18:57 crc kubenswrapper[4981]: I0227 19:18:57.709472 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/073fb193-6587-4c6c-b20d-82a5b3075a20-kube-api-access-9gcsf" (OuterVolumeSpecName: "kube-api-access-9gcsf") pod "073fb193-6587-4c6c-b20d-82a5b3075a20" (UID: "073fb193-6587-4c6c-b20d-82a5b3075a20"). InnerVolumeSpecName "kube-api-access-9gcsf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:18:57 crc kubenswrapper[4981]: I0227 19:18:57.804924 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gcsf\" (UniqueName: \"kubernetes.io/projected/073fb193-6587-4c6c-b20d-82a5b3075a20-kube-api-access-9gcsf\") on node \"crc\" DevicePath \"\"" Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.451176 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"073fb193-6587-4c6c-b20d-82a5b3075a20","Type":"ContainerDied","Data":"5f7e3ee9635e10d6c0b683ebb18e8767a18363635bd46a2bf65f017b1428a0b2"} Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.453321 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"948b2b0a-6ee4-422a-8f8f-ba4271a94c61","Type":"ContainerStarted","Data":"5495e6bfbdc4e38a6bec4b37576526e785cb825c8afc446efcd0265964e21b1d"} Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.453427 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"948b2b0a-6ee4-422a-8f8f-ba4271a94c61","Type":"ContainerStarted","Data":"a298c72fbeb0637e031906fb511517a77777d7b86f4004d01b045a4172b13fc1"} Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.453550 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"948b2b0a-6ee4-422a-8f8f-ba4271a94c61","Type":"ContainerStarted","Data":"c758e6c1503c3e6aac7843a82fe888ed0594f4bbd79865f4499914193e254efe"} Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.452505 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.453381 4981 scope.go:117] "RemoveContainer" containerID="290ab13d1e06be99faf848d99a57b28cdaead931e71465163c086723480eca80" Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.457740 4981 generic.go:334] "Generic (PLEG): container finished" podID="4f1f0012-8326-4378-9870-0da9e2128a42" containerID="683781b1d22b4e5d94316ec9c2e6b71289420b5a72944d06a1be1ebb8a8c1acc" exitCode=0 Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.457767 4981 generic.go:334] "Generic (PLEG): container finished" podID="4f1f0012-8326-4378-9870-0da9e2128a42" containerID="b1b4d22213f44911ebe10a3402ffc2fe9e45bf83ac72fb843b704b20dfc45c5d" exitCode=2 Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.457776 4981 generic.go:334] "Generic (PLEG): container finished" podID="4f1f0012-8326-4378-9870-0da9e2128a42" containerID="cf37390c0d281caea3ea0a3e13a5ad34f321561bf275a4b694e4c3e1b563144d" exitCode=0 Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.457796 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f1f0012-8326-4378-9870-0da9e2128a42","Type":"ContainerDied","Data":"683781b1d22b4e5d94316ec9c2e6b71289420b5a72944d06a1be1ebb8a8c1acc"} Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.457813 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f1f0012-8326-4378-9870-0da9e2128a42","Type":"ContainerDied","Data":"b1b4d22213f44911ebe10a3402ffc2fe9e45bf83ac72fb843b704b20dfc45c5d"} Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.457824 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f1f0012-8326-4378-9870-0da9e2128a42","Type":"ContainerDied","Data":"cf37390c0d281caea3ea0a3e13a5ad34f321561bf275a4b694e4c3e1b563144d"} Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.498963 4981 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.498928781 podStartE2EDuration="2.498928781s" podCreationTimestamp="2026-02-27 19:18:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:18:58.486613593 +0000 UTC m=+2037.965394753" watchObservedRunningTime="2026-02-27 19:18:58.498928781 +0000 UTC m=+2037.977709961" Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.539858 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.555897 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.578443 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 19:18:58 crc kubenswrapper[4981]: E0227 19:18:58.584233 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="073fb193-6587-4c6c-b20d-82a5b3075a20" containerName="kube-state-metrics" Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.584418 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="073fb193-6587-4c6c-b20d-82a5b3075a20" containerName="kube-state-metrics" Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.584793 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="073fb193-6587-4c6c-b20d-82a5b3075a20" containerName="kube-state-metrics" Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.586052 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.588901 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.589136 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.606698 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.630574 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89937a2b-e16c-4964-a540-5a2f8fe812b7-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"89937a2b-e16c-4964-a540-5a2f8fe812b7\") " pod="openstack/kube-state-metrics-0" Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.630694 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/89937a2b-e16c-4964-a540-5a2f8fe812b7-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"89937a2b-e16c-4964-a540-5a2f8fe812b7\") " pod="openstack/kube-state-metrics-0" Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.630825 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/89937a2b-e16c-4964-a540-5a2f8fe812b7-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"89937a2b-e16c-4964-a540-5a2f8fe812b7\") " pod="openstack/kube-state-metrics-0" Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.631667 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4mz7\" (UniqueName: 
\"kubernetes.io/projected/89937a2b-e16c-4964-a540-5a2f8fe812b7-kube-api-access-l4mz7\") pod \"kube-state-metrics-0\" (UID: \"89937a2b-e16c-4964-a540-5a2f8fe812b7\") " pod="openstack/kube-state-metrics-0" Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.734158 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89937a2b-e16c-4964-a540-5a2f8fe812b7-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"89937a2b-e16c-4964-a540-5a2f8fe812b7\") " pod="openstack/kube-state-metrics-0" Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.736308 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/89937a2b-e16c-4964-a540-5a2f8fe812b7-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"89937a2b-e16c-4964-a540-5a2f8fe812b7\") " pod="openstack/kube-state-metrics-0" Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.736503 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/89937a2b-e16c-4964-a540-5a2f8fe812b7-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"89937a2b-e16c-4964-a540-5a2f8fe812b7\") " pod="openstack/kube-state-metrics-0" Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.737135 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4mz7\" (UniqueName: \"kubernetes.io/projected/89937a2b-e16c-4964-a540-5a2f8fe812b7-kube-api-access-l4mz7\") pod \"kube-state-metrics-0\" (UID: \"89937a2b-e16c-4964-a540-5a2f8fe812b7\") " pod="openstack/kube-state-metrics-0" Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.740713 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/89937a2b-e16c-4964-a540-5a2f8fe812b7-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"89937a2b-e16c-4964-a540-5a2f8fe812b7\") " pod="openstack/kube-state-metrics-0" Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.741532 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89937a2b-e16c-4964-a540-5a2f8fe812b7-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"89937a2b-e16c-4964-a540-5a2f8fe812b7\") " pod="openstack/kube-state-metrics-0" Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.746010 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/89937a2b-e16c-4964-a540-5a2f8fe812b7-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"89937a2b-e16c-4964-a540-5a2f8fe812b7\") " pod="openstack/kube-state-metrics-0" Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.757470 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4mz7\" (UniqueName: \"kubernetes.io/projected/89937a2b-e16c-4964-a540-5a2f8fe812b7-kube-api-access-l4mz7\") pod \"kube-state-metrics-0\" (UID: \"89937a2b-e16c-4964-a540-5a2f8fe812b7\") " pod="openstack/kube-state-metrics-0" Feb 27 19:18:58 crc kubenswrapper[4981]: I0227 19:18:58.905840 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 19:18:59 crc kubenswrapper[4981]: I0227 19:18:59.382089 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 19:18:59 crc kubenswrapper[4981]: W0227 19:18:59.388454 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89937a2b_e16c_4964_a540_5a2f8fe812b7.slice/crio-9e003c62fdbf646c5dc7afe3de60d0dd0ac04afdf3bde31e15f44ec669421042 WatchSource:0}: Error finding container 9e003c62fdbf646c5dc7afe3de60d0dd0ac04afdf3bde31e15f44ec669421042: Status 404 returned error can't find the container with id 9e003c62fdbf646c5dc7afe3de60d0dd0ac04afdf3bde31e15f44ec669421042 Feb 27 19:18:59 crc kubenswrapper[4981]: I0227 19:18:59.469668 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"89937a2b-e16c-4964-a540-5a2f8fe812b7","Type":"ContainerStarted","Data":"9e003c62fdbf646c5dc7afe3de60d0dd0ac04afdf3bde31e15f44ec669421042"} Feb 27 19:18:59 crc kubenswrapper[4981]: I0227 19:18:59.640348 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="073fb193-6587-4c6c-b20d-82a5b3075a20" path="/var/lib/kubelet/pods/073fb193-6587-4c6c-b20d-82a5b3075a20/volumes" Feb 27 19:18:59 crc kubenswrapper[4981]: I0227 19:18:59.827295 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" Feb 27 19:18:59 crc kubenswrapper[4981]: I0227 19:18:59.908478 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-vx98x"] Feb 27 19:18:59 crc kubenswrapper[4981]: I0227 19:18:59.908731 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" podUID="2d817344-b2eb-45f6-a948-0e530172230e" containerName="dnsmasq-dns" 
containerID="cri-o://b038edbc7efa55a4e22ca7d81df3fa8f8ff302284f1571f439df9fd91c23ed20" gracePeriod=10 Feb 27 19:18:59 crc kubenswrapper[4981]: I0227 19:18:59.978858 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 19:18:59 crc kubenswrapper[4981]: I0227 19:18:59.979020 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 19:18:59 crc kubenswrapper[4981]: I0227 19:18:59.991297 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.028613 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.445782 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.472162 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-dns-svc\") pod \"2d817344-b2eb-45f6-a948-0e530172230e\" (UID: \"2d817344-b2eb-45f6-a948-0e530172230e\") " Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.472230 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-config\") pod \"2d817344-b2eb-45f6-a948-0e530172230e\" (UID: \"2d817344-b2eb-45f6-a948-0e530172230e\") " Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.472286 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-ovsdbserver-sb\") pod \"2d817344-b2eb-45f6-a948-0e530172230e\" (UID: \"2d817344-b2eb-45f6-a948-0e530172230e\") " 
Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.472337 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-ovsdbserver-nb\") pod \"2d817344-b2eb-45f6-a948-0e530172230e\" (UID: \"2d817344-b2eb-45f6-a948-0e530172230e\") " Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.472401 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncxmp\" (UniqueName: \"kubernetes.io/projected/2d817344-b2eb-45f6-a948-0e530172230e-kube-api-access-ncxmp\") pod \"2d817344-b2eb-45f6-a948-0e530172230e\" (UID: \"2d817344-b2eb-45f6-a948-0e530172230e\") " Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.472430 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-dns-swift-storage-0\") pod \"2d817344-b2eb-45f6-a948-0e530172230e\" (UID: \"2d817344-b2eb-45f6-a948-0e530172230e\") " Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.572553 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d817344-b2eb-45f6-a948-0e530172230e-kube-api-access-ncxmp" (OuterVolumeSpecName: "kube-api-access-ncxmp") pod "2d817344-b2eb-45f6-a948-0e530172230e" (UID: "2d817344-b2eb-45f6-a948-0e530172230e"). InnerVolumeSpecName "kube-api-access-ncxmp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.574964 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncxmp\" (UniqueName: \"kubernetes.io/projected/2d817344-b2eb-45f6-a948-0e530172230e-kube-api-access-ncxmp\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.595578 4981 generic.go:334] "Generic (PLEG): container finished" podID="2d817344-b2eb-45f6-a948-0e530172230e" containerID="b038edbc7efa55a4e22ca7d81df3fa8f8ff302284f1571f439df9fd91c23ed20" exitCode=0 Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.595672 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" event={"ID":"2d817344-b2eb-45f6-a948-0e530172230e","Type":"ContainerDied","Data":"b038edbc7efa55a4e22ca7d81df3fa8f8ff302284f1571f439df9fd91c23ed20"} Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.595698 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" event={"ID":"2d817344-b2eb-45f6-a948-0e530172230e","Type":"ContainerDied","Data":"3d9d0e47ff79c40b30c6325c775257182e7df297271d554dfdfe6fa50a8c43c8"} Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.595713 4981 scope.go:117] "RemoveContainer" containerID="b038edbc7efa55a4e22ca7d81df3fa8f8ff302284f1571f439df9fd91c23ed20" Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.595875 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-vx98x" Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.644237 4981 generic.go:334] "Generic (PLEG): container finished" podID="25ef4760-0e11-422c-b084-afe3d47fbdac" containerID="37d59483cf58af393b8691733cb08b1211746466334fa7c73d6a0931c5a94550" exitCode=0 Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.644347 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-k82mt" event={"ID":"25ef4760-0e11-422c-b084-afe3d47fbdac","Type":"ContainerDied","Data":"37d59483cf58af393b8691733cb08b1211746466334fa7c73d6a0931c5a94550"} Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.677654 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2d817344-b2eb-45f6-a948-0e530172230e" (UID: "2d817344-b2eb-45f6-a948-0e530172230e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.689657 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-config" (OuterVolumeSpecName: "config") pod "2d817344-b2eb-45f6-a948-0e530172230e" (UID: "2d817344-b2eb-45f6-a948-0e530172230e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.706865 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"89937a2b-e16c-4964-a540-5a2f8fe812b7","Type":"ContainerStarted","Data":"6b8b8b2fc415f51817d8b6a2764a8f9e401f55ae49e1275fbc6feb278754299e"} Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.707181 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.718448 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2d817344-b2eb-45f6-a948-0e530172230e" (UID: "2d817344-b2eb-45f6-a948-0e530172230e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.724232 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2d817344-b2eb-45f6-a948-0e530172230e" (UID: "2d817344-b2eb-45f6-a948-0e530172230e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.745989 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.329652688 podStartE2EDuration="2.745967747s" podCreationTimestamp="2026-02-27 19:18:58 +0000 UTC" firstStartedPulling="2026-02-27 19:18:59.391392237 +0000 UTC m=+2038.870173397" lastFinishedPulling="2026-02-27 19:18:59.807707296 +0000 UTC m=+2039.286488456" observedRunningTime="2026-02-27 19:19:00.744454801 +0000 UTC m=+2040.223235961" watchObservedRunningTime="2026-02-27 19:19:00.745967747 +0000 UTC m=+2040.224748907" Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.751318 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.757160 4981 scope.go:117] "RemoveContainer" containerID="12c8150f01b22482f259335ce330f0dce75e1c781aaaf7e1683e273b1df60083" Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.780588 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.780864 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.780944 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.781073 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.787418 4981 scope.go:117] "RemoveContainer" containerID="b038edbc7efa55a4e22ca7d81df3fa8f8ff302284f1571f439df9fd91c23ed20" Feb 27 19:19:00 crc kubenswrapper[4981]: E0227 19:19:00.787926 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b038edbc7efa55a4e22ca7d81df3fa8f8ff302284f1571f439df9fd91c23ed20\": container with ID starting with b038edbc7efa55a4e22ca7d81df3fa8f8ff302284f1571f439df9fd91c23ed20 not found: ID does not exist" containerID="b038edbc7efa55a4e22ca7d81df3fa8f8ff302284f1571f439df9fd91c23ed20" Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.787957 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b038edbc7efa55a4e22ca7d81df3fa8f8ff302284f1571f439df9fd91c23ed20"} err="failed to get container status \"b038edbc7efa55a4e22ca7d81df3fa8f8ff302284f1571f439df9fd91c23ed20\": rpc error: code = NotFound desc = could not find container \"b038edbc7efa55a4e22ca7d81df3fa8f8ff302284f1571f439df9fd91c23ed20\": container with ID starting with b038edbc7efa55a4e22ca7d81df3fa8f8ff302284f1571f439df9fd91c23ed20 not found: ID does not exist" Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.787986 4981 scope.go:117] "RemoveContainer" containerID="12c8150f01b22482f259335ce330f0dce75e1c781aaaf7e1683e273b1df60083" Feb 27 19:19:00 crc kubenswrapper[4981]: E0227 19:19:00.788238 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12c8150f01b22482f259335ce330f0dce75e1c781aaaf7e1683e273b1df60083\": container with ID starting with 12c8150f01b22482f259335ce330f0dce75e1c781aaaf7e1683e273b1df60083 not found: ID does not exist" containerID="12c8150f01b22482f259335ce330f0dce75e1c781aaaf7e1683e273b1df60083" Feb 27 
19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.788255 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12c8150f01b22482f259335ce330f0dce75e1c781aaaf7e1683e273b1df60083"} err="failed to get container status \"12c8150f01b22482f259335ce330f0dce75e1c781aaaf7e1683e273b1df60083\": rpc error: code = NotFound desc = could not find container \"12c8150f01b22482f259335ce330f0dce75e1c781aaaf7e1683e273b1df60083\": container with ID starting with 12c8150f01b22482f259335ce330f0dce75e1c781aaaf7e1683e273b1df60083 not found: ID does not exist" Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.794297 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2d817344-b2eb-45f6-a948-0e530172230e" (UID: "2d817344-b2eb-45f6-a948-0e530172230e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.882727 4981 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d817344-b2eb-45f6-a948-0e530172230e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.948431 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-vx98x"] Feb 27 19:19:00 crc kubenswrapper[4981]: I0227 19:19:00.956611 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-vx98x"] Feb 27 19:19:01 crc kubenswrapper[4981]: I0227 19:19:01.020408 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Feb 27 19:19:01 crc kubenswrapper[4981]: I0227 19:19:01.020408 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 19:19:01 crc kubenswrapper[4981]: I0227 19:19:01.641074 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d817344-b2eb-45f6-a948-0e530172230e" path="/var/lib/kubelet/pods/2d817344-b2eb-45f6-a948-0e530172230e/volumes" Feb 27 19:19:01 crc kubenswrapper[4981]: I0227 19:19:01.720429 4981 generic.go:334] "Generic (PLEG): container finished" podID="f1cb5d15-1a22-4c56-a028-11eb02f9e043" containerID="007fd137645033b11c92c0c281ae4b91b4c7023beffad98c9d943dcce0e7b915" exitCode=0 Feb 27 19:19:01 crc kubenswrapper[4981]: I0227 19:19:01.720577 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9sxrp" event={"ID":"f1cb5d15-1a22-4c56-a028-11eb02f9e043","Type":"ContainerDied","Data":"007fd137645033b11c92c0c281ae4b91b4c7023beffad98c9d943dcce0e7b915"} Feb 27 19:19:01 crc kubenswrapper[4981]: I0227 19:19:01.959889 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 19:19:01 crc kubenswrapper[4981]: I0227 19:19:01.961486 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 19:19:02 crc kubenswrapper[4981]: I0227 19:19:02.165788 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-k82mt" Feb 27 19:19:02 crc kubenswrapper[4981]: I0227 19:19:02.316632 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25ef4760-0e11-422c-b084-afe3d47fbdac-scripts\") pod \"25ef4760-0e11-422c-b084-afe3d47fbdac\" (UID: \"25ef4760-0e11-422c-b084-afe3d47fbdac\") " Feb 27 19:19:02 crc kubenswrapper[4981]: I0227 19:19:02.316740 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ef4760-0e11-422c-b084-afe3d47fbdac-config-data\") pod \"25ef4760-0e11-422c-b084-afe3d47fbdac\" (UID: \"25ef4760-0e11-422c-b084-afe3d47fbdac\") " Feb 27 19:19:02 crc kubenswrapper[4981]: I0227 19:19:02.316775 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ef4760-0e11-422c-b084-afe3d47fbdac-combined-ca-bundle\") pod \"25ef4760-0e11-422c-b084-afe3d47fbdac\" (UID: \"25ef4760-0e11-422c-b084-afe3d47fbdac\") " Feb 27 19:19:02 crc kubenswrapper[4981]: I0227 19:19:02.316860 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dfdb\" (UniqueName: \"kubernetes.io/projected/25ef4760-0e11-422c-b084-afe3d47fbdac-kube-api-access-9dfdb\") pod \"25ef4760-0e11-422c-b084-afe3d47fbdac\" (UID: \"25ef4760-0e11-422c-b084-afe3d47fbdac\") " Feb 27 19:19:02 crc kubenswrapper[4981]: I0227 19:19:02.323753 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25ef4760-0e11-422c-b084-afe3d47fbdac-kube-api-access-9dfdb" (OuterVolumeSpecName: "kube-api-access-9dfdb") pod "25ef4760-0e11-422c-b084-afe3d47fbdac" (UID: "25ef4760-0e11-422c-b084-afe3d47fbdac"). InnerVolumeSpecName "kube-api-access-9dfdb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:19:02 crc kubenswrapper[4981]: I0227 19:19:02.327296 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ef4760-0e11-422c-b084-afe3d47fbdac-scripts" (OuterVolumeSpecName: "scripts") pod "25ef4760-0e11-422c-b084-afe3d47fbdac" (UID: "25ef4760-0e11-422c-b084-afe3d47fbdac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:02 crc kubenswrapper[4981]: E0227 19:19:02.342331 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25ef4760-0e11-422c-b084-afe3d47fbdac-config-data podName:25ef4760-0e11-422c-b084-afe3d47fbdac nodeName:}" failed. No retries permitted until 2026-02-27 19:19:02.842268342 +0000 UTC m=+2042.321049502 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/25ef4760-0e11-422c-b084-afe3d47fbdac-config-data") pod "25ef4760-0e11-422c-b084-afe3d47fbdac" (UID: "25ef4760-0e11-422c-b084-afe3d47fbdac") : error deleting /var/lib/kubelet/pods/25ef4760-0e11-422c-b084-afe3d47fbdac/volume-subpaths: remove /var/lib/kubelet/pods/25ef4760-0e11-422c-b084-afe3d47fbdac/volume-subpaths: no such file or directory Feb 27 19:19:02 crc kubenswrapper[4981]: I0227 19:19:02.345823 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ef4760-0e11-422c-b084-afe3d47fbdac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25ef4760-0e11-422c-b084-afe3d47fbdac" (UID: "25ef4760-0e11-422c-b084-afe3d47fbdac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:02 crc kubenswrapper[4981]: I0227 19:19:02.422298 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dfdb\" (UniqueName: \"kubernetes.io/projected/25ef4760-0e11-422c-b084-afe3d47fbdac-kube-api-access-9dfdb\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:02 crc kubenswrapper[4981]: I0227 19:19:02.422338 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25ef4760-0e11-422c-b084-afe3d47fbdac-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:02 crc kubenswrapper[4981]: I0227 19:19:02.422355 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ef4760-0e11-422c-b084-afe3d47fbdac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:02 crc kubenswrapper[4981]: I0227 19:19:02.731942 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-k82mt" Feb 27 19:19:02 crc kubenswrapper[4981]: I0227 19:19:02.732799 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-k82mt" event={"ID":"25ef4760-0e11-422c-b084-afe3d47fbdac","Type":"ContainerDied","Data":"b0c8e285ae1569b300ebacf2189587bb42a6a10e52b2e618e43c44da8ea7c887"} Feb 27 19:19:02 crc kubenswrapper[4981]: I0227 19:19:02.732826 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0c8e285ae1569b300ebacf2189587bb42a6a10e52b2e618e43c44da8ea7c887" Feb 27 19:19:02 crc kubenswrapper[4981]: I0227 19:19:02.880961 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 19:19:02 crc kubenswrapper[4981]: I0227 19:19:02.881591 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8" containerName="nova-api-log" 
containerID="cri-o://d97f9378864dc341a7cf6aac96c7d09b250d90254a151f0c12b37f2f1de3fe5c" gracePeriod=30 Feb 27 19:19:02 crc kubenswrapper[4981]: I0227 19:19:02.881763 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8" containerName="nova-api-api" containerID="cri-o://015f9b1ec9b883b36119c3474e2c485e3b8c21b974226a67510d92408f36a1ce" gracePeriod=30 Feb 27 19:19:02 crc kubenswrapper[4981]: I0227 19:19:02.932514 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 19:19:02 crc kubenswrapper[4981]: I0227 19:19:02.934518 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ef4760-0e11-422c-b084-afe3d47fbdac-config-data\") pod \"25ef4760-0e11-422c-b084-afe3d47fbdac\" (UID: \"25ef4760-0e11-422c-b084-afe3d47fbdac\") " Feb 27 19:19:02 crc kubenswrapper[4981]: I0227 19:19:02.942883 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 19:19:02 crc kubenswrapper[4981]: I0227 19:19:02.943147 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="770989f6-5783-4874-96fd-6fc1a6ea0757" containerName="nova-scheduler-scheduler" containerID="cri-o://041e2383b18a3d3d9bc0cb4291a7cc0041aaec7f99f62945dc6f15a8a8d352b1" gracePeriod=30 Feb 27 19:19:02 crc kubenswrapper[4981]: I0227 19:19:02.960287 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ef4760-0e11-422c-b084-afe3d47fbdac-config-data" (OuterVolumeSpecName: "config-data") pod "25ef4760-0e11-422c-b084-afe3d47fbdac" (UID: "25ef4760-0e11-422c-b084-afe3d47fbdac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.036503 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ef4760-0e11-422c-b084-afe3d47fbdac-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.099040 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9sxrp" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.238953 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1cb5d15-1a22-4c56-a028-11eb02f9e043-scripts\") pod \"f1cb5d15-1a22-4c56-a028-11eb02f9e043\" (UID: \"f1cb5d15-1a22-4c56-a028-11eb02f9e043\") " Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.239046 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1cb5d15-1a22-4c56-a028-11eb02f9e043-combined-ca-bundle\") pod \"f1cb5d15-1a22-4c56-a028-11eb02f9e043\" (UID: \"f1cb5d15-1a22-4c56-a028-11eb02f9e043\") " Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.239334 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1cb5d15-1a22-4c56-a028-11eb02f9e043-config-data\") pod \"f1cb5d15-1a22-4c56-a028-11eb02f9e043\" (UID: \"f1cb5d15-1a22-4c56-a028-11eb02f9e043\") " Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.239373 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92rl8\" (UniqueName: \"kubernetes.io/projected/f1cb5d15-1a22-4c56-a028-11eb02f9e043-kube-api-access-92rl8\") pod \"f1cb5d15-1a22-4c56-a028-11eb02f9e043\" (UID: \"f1cb5d15-1a22-4c56-a028-11eb02f9e043\") " Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.247812 4981 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1cb5d15-1a22-4c56-a028-11eb02f9e043-kube-api-access-92rl8" (OuterVolumeSpecName: "kube-api-access-92rl8") pod "f1cb5d15-1a22-4c56-a028-11eb02f9e043" (UID: "f1cb5d15-1a22-4c56-a028-11eb02f9e043"). InnerVolumeSpecName "kube-api-access-92rl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.250525 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1cb5d15-1a22-4c56-a028-11eb02f9e043-scripts" (OuterVolumeSpecName: "scripts") pod "f1cb5d15-1a22-4c56-a028-11eb02f9e043" (UID: "f1cb5d15-1a22-4c56-a028-11eb02f9e043"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.268699 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1cb5d15-1a22-4c56-a028-11eb02f9e043-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1cb5d15-1a22-4c56-a028-11eb02f9e043" (UID: "f1cb5d15-1a22-4c56-a028-11eb02f9e043"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.292993 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1cb5d15-1a22-4c56-a028-11eb02f9e043-config-data" (OuterVolumeSpecName: "config-data") pod "f1cb5d15-1a22-4c56-a028-11eb02f9e043" (UID: "f1cb5d15-1a22-4c56-a028-11eb02f9e043"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.341331 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1cb5d15-1a22-4c56-a028-11eb02f9e043-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.341371 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1cb5d15-1a22-4c56-a028-11eb02f9e043-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.341386 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1cb5d15-1a22-4c56-a028-11eb02f9e043-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.341397 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92rl8\" (UniqueName: \"kubernetes.io/projected/f1cb5d15-1a22-4c56-a028-11eb02f9e043-kube-api-access-92rl8\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.578731 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.747277 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1f0012-8326-4378-9870-0da9e2128a42-combined-ca-bundle\") pod \"4f1f0012-8326-4378-9870-0da9e2128a42\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.747359 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfqhs\" (UniqueName: \"kubernetes.io/projected/4f1f0012-8326-4378-9870-0da9e2128a42-kube-api-access-kfqhs\") pod \"4f1f0012-8326-4378-9870-0da9e2128a42\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.747393 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f1f0012-8326-4378-9870-0da9e2128a42-log-httpd\") pod \"4f1f0012-8326-4378-9870-0da9e2128a42\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.747429 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f1f0012-8326-4378-9870-0da9e2128a42-config-data\") pod \"4f1f0012-8326-4378-9870-0da9e2128a42\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.747525 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f1f0012-8326-4378-9870-0da9e2128a42-scripts\") pod \"4f1f0012-8326-4378-9870-0da9e2128a42\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.747595 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/4f1f0012-8326-4378-9870-0da9e2128a42-sg-core-conf-yaml\") pod \"4f1f0012-8326-4378-9870-0da9e2128a42\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.747618 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f1f0012-8326-4378-9870-0da9e2128a42-run-httpd\") pod \"4f1f0012-8326-4378-9870-0da9e2128a42\" (UID: \"4f1f0012-8326-4378-9870-0da9e2128a42\") " Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.749821 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f1f0012-8326-4378-9870-0da9e2128a42-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4f1f0012-8326-4378-9870-0da9e2128a42" (UID: "4f1f0012-8326-4378-9870-0da9e2128a42"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.751942 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f1f0012-8326-4378-9870-0da9e2128a42-kube-api-access-kfqhs" (OuterVolumeSpecName: "kube-api-access-kfqhs") pod "4f1f0012-8326-4378-9870-0da9e2128a42" (UID: "4f1f0012-8326-4378-9870-0da9e2128a42"). InnerVolumeSpecName "kube-api-access-kfqhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.753191 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f1f0012-8326-4378-9870-0da9e2128a42-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4f1f0012-8326-4378-9870-0da9e2128a42" (UID: "4f1f0012-8326-4378-9870-0da9e2128a42"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.753903 4981 generic.go:334] "Generic (PLEG): container finished" podID="7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8" containerID="d97f9378864dc341a7cf6aac96c7d09b250d90254a151f0c12b37f2f1de3fe5c" exitCode=143 Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.753998 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8","Type":"ContainerDied","Data":"d97f9378864dc341a7cf6aac96c7d09b250d90254a151f0c12b37f2f1de3fe5c"} Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.759821 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f1f0012-8326-4378-9870-0da9e2128a42-scripts" (OuterVolumeSpecName: "scripts") pod "4f1f0012-8326-4378-9870-0da9e2128a42" (UID: "4f1f0012-8326-4378-9870-0da9e2128a42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.764891 4981 generic.go:334] "Generic (PLEG): container finished" podID="4f1f0012-8326-4378-9870-0da9e2128a42" containerID="39a33d7f90b81049a430bd8fb87e2b3fa095391998769fa7d5f18caca0535e5d" exitCode=0 Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.764963 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f1f0012-8326-4378-9870-0da9e2128a42","Type":"ContainerDied","Data":"39a33d7f90b81049a430bd8fb87e2b3fa095391998769fa7d5f18caca0535e5d"} Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.764993 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f1f0012-8326-4378-9870-0da9e2128a42","Type":"ContainerDied","Data":"c5efd250e8704917f562c944f6ec88ce1ff75c5b549530a0b119026a2d8dc084"} Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.765011 4981 scope.go:117] "RemoveContainer" 
containerID="683781b1d22b4e5d94316ec9c2e6b71289420b5a72944d06a1be1ebb8a8c1acc" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.765135 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.776453 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9sxrp" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.776666 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="948b2b0a-6ee4-422a-8f8f-ba4271a94c61" containerName="nova-metadata-metadata" containerID="cri-o://5495e6bfbdc4e38a6bec4b37576526e785cb825c8afc446efcd0265964e21b1d" gracePeriod=30 Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.776440 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9sxrp" event={"ID":"f1cb5d15-1a22-4c56-a028-11eb02f9e043","Type":"ContainerDied","Data":"6794709cda8cd4fdc67a3918dd7b221d1ce77268656702a18fe63d0515d1562c"} Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.776863 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6794709cda8cd4fdc67a3918dd7b221d1ce77268656702a18fe63d0515d1562c" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.777895 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="948b2b0a-6ee4-422a-8f8f-ba4271a94c61" containerName="nova-metadata-log" containerID="cri-o://a298c72fbeb0637e031906fb511517a77777d7b86f4004d01b045a4172b13fc1" gracePeriod=30 Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.793887 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f1f0012-8326-4378-9870-0da9e2128a42-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4f1f0012-8326-4378-9870-0da9e2128a42" 
(UID: "4f1f0012-8326-4378-9870-0da9e2128a42"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.810346 4981 scope.go:117] "RemoveContainer" containerID="b1b4d22213f44911ebe10a3402ffc2fe9e45bf83ac72fb843b704b20dfc45c5d" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.822245 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 27 19:19:03 crc kubenswrapper[4981]: E0227 19:19:03.822829 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f1f0012-8326-4378-9870-0da9e2128a42" containerName="sg-core" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.822859 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f1f0012-8326-4378-9870-0da9e2128a42" containerName="sg-core" Feb 27 19:19:03 crc kubenswrapper[4981]: E0227 19:19:03.822877 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f1f0012-8326-4378-9870-0da9e2128a42" containerName="ceilometer-central-agent" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.822885 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f1f0012-8326-4378-9870-0da9e2128a42" containerName="ceilometer-central-agent" Feb 27 19:19:03 crc kubenswrapper[4981]: E0227 19:19:03.822915 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f1f0012-8326-4378-9870-0da9e2128a42" containerName="ceilometer-notification-agent" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.822921 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f1f0012-8326-4378-9870-0da9e2128a42" containerName="ceilometer-notification-agent" Feb 27 19:19:03 crc kubenswrapper[4981]: E0227 19:19:03.822932 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25ef4760-0e11-422c-b084-afe3d47fbdac" containerName="nova-manage" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.822938 4981 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="25ef4760-0e11-422c-b084-afe3d47fbdac" containerName="nova-manage" Feb 27 19:19:03 crc kubenswrapper[4981]: E0227 19:19:03.822950 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d817344-b2eb-45f6-a948-0e530172230e" containerName="dnsmasq-dns" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.822956 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d817344-b2eb-45f6-a948-0e530172230e" containerName="dnsmasq-dns" Feb 27 19:19:03 crc kubenswrapper[4981]: E0227 19:19:03.822967 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d817344-b2eb-45f6-a948-0e530172230e" containerName="init" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.822989 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d817344-b2eb-45f6-a948-0e530172230e" containerName="init" Feb 27 19:19:03 crc kubenswrapper[4981]: E0227 19:19:03.823004 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1cb5d15-1a22-4c56-a028-11eb02f9e043" containerName="nova-cell1-conductor-db-sync" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.823010 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1cb5d15-1a22-4c56-a028-11eb02f9e043" containerName="nova-cell1-conductor-db-sync" Feb 27 19:19:03 crc kubenswrapper[4981]: E0227 19:19:03.823023 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f1f0012-8326-4378-9870-0da9e2128a42" containerName="proxy-httpd" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.823029 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f1f0012-8326-4378-9870-0da9e2128a42" containerName="proxy-httpd" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.823275 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f1f0012-8326-4378-9870-0da9e2128a42" containerName="ceilometer-notification-agent" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.823306 4981 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="4f1f0012-8326-4378-9870-0da9e2128a42" containerName="ceilometer-central-agent" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.823314 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1cb5d15-1a22-4c56-a028-11eb02f9e043" containerName="nova-cell1-conductor-db-sync" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.823324 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d817344-b2eb-45f6-a948-0e530172230e" containerName="dnsmasq-dns" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.823343 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="25ef4760-0e11-422c-b084-afe3d47fbdac" containerName="nova-manage" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.823355 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f1f0012-8326-4378-9870-0da9e2128a42" containerName="proxy-httpd" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.823364 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f1f0012-8326-4378-9870-0da9e2128a42" containerName="sg-core" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.824301 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.826598 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.849551 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f1f0012-8326-4378-9870-0da9e2128a42-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.849579 4981 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f1f0012-8326-4378-9870-0da9e2128a42-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.849588 4981 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f1f0012-8326-4378-9870-0da9e2128a42-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.849597 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfqhs\" (UniqueName: \"kubernetes.io/projected/4f1f0012-8326-4378-9870-0da9e2128a42-kube-api-access-kfqhs\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.849605 4981 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f1f0012-8326-4378-9870-0da9e2128a42-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.854740 4981 scope.go:117] "RemoveContainer" containerID="39a33d7f90b81049a430bd8fb87e2b3fa095391998769fa7d5f18caca0535e5d" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.862166 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.867690 4981 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f1f0012-8326-4378-9870-0da9e2128a42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f1f0012-8326-4378-9870-0da9e2128a42" (UID: "4f1f0012-8326-4378-9870-0da9e2128a42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.893466 4981 scope.go:117] "RemoveContainer" containerID="cf37390c0d281caea3ea0a3e13a5ad34f321561bf275a4b694e4c3e1b563144d" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.912370 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f1f0012-8326-4378-9870-0da9e2128a42-config-data" (OuterVolumeSpecName: "config-data") pod "4f1f0012-8326-4378-9870-0da9e2128a42" (UID: "4f1f0012-8326-4378-9870-0da9e2128a42"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.934685 4981 scope.go:117] "RemoveContainer" containerID="683781b1d22b4e5d94316ec9c2e6b71289420b5a72944d06a1be1ebb8a8c1acc" Feb 27 19:19:03 crc kubenswrapper[4981]: E0227 19:19:03.935339 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"683781b1d22b4e5d94316ec9c2e6b71289420b5a72944d06a1be1ebb8a8c1acc\": container with ID starting with 683781b1d22b4e5d94316ec9c2e6b71289420b5a72944d06a1be1ebb8a8c1acc not found: ID does not exist" containerID="683781b1d22b4e5d94316ec9c2e6b71289420b5a72944d06a1be1ebb8a8c1acc" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.935373 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"683781b1d22b4e5d94316ec9c2e6b71289420b5a72944d06a1be1ebb8a8c1acc"} err="failed to get container status \"683781b1d22b4e5d94316ec9c2e6b71289420b5a72944d06a1be1ebb8a8c1acc\": rpc error: code = NotFound desc = could not find container 
\"683781b1d22b4e5d94316ec9c2e6b71289420b5a72944d06a1be1ebb8a8c1acc\": container with ID starting with 683781b1d22b4e5d94316ec9c2e6b71289420b5a72944d06a1be1ebb8a8c1acc not found: ID does not exist" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.935413 4981 scope.go:117] "RemoveContainer" containerID="b1b4d22213f44911ebe10a3402ffc2fe9e45bf83ac72fb843b704b20dfc45c5d" Feb 27 19:19:03 crc kubenswrapper[4981]: E0227 19:19:03.935593 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1b4d22213f44911ebe10a3402ffc2fe9e45bf83ac72fb843b704b20dfc45c5d\": container with ID starting with b1b4d22213f44911ebe10a3402ffc2fe9e45bf83ac72fb843b704b20dfc45c5d not found: ID does not exist" containerID="b1b4d22213f44911ebe10a3402ffc2fe9e45bf83ac72fb843b704b20dfc45c5d" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.935616 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1b4d22213f44911ebe10a3402ffc2fe9e45bf83ac72fb843b704b20dfc45c5d"} err="failed to get container status \"b1b4d22213f44911ebe10a3402ffc2fe9e45bf83ac72fb843b704b20dfc45c5d\": rpc error: code = NotFound desc = could not find container \"b1b4d22213f44911ebe10a3402ffc2fe9e45bf83ac72fb843b704b20dfc45c5d\": container with ID starting with b1b4d22213f44911ebe10a3402ffc2fe9e45bf83ac72fb843b704b20dfc45c5d not found: ID does not exist" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.935649 4981 scope.go:117] "RemoveContainer" containerID="39a33d7f90b81049a430bd8fb87e2b3fa095391998769fa7d5f18caca0535e5d" Feb 27 19:19:03 crc kubenswrapper[4981]: E0227 19:19:03.935845 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39a33d7f90b81049a430bd8fb87e2b3fa095391998769fa7d5f18caca0535e5d\": container with ID starting with 39a33d7f90b81049a430bd8fb87e2b3fa095391998769fa7d5f18caca0535e5d not found: ID does not exist" 
containerID="39a33d7f90b81049a430bd8fb87e2b3fa095391998769fa7d5f18caca0535e5d" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.935890 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39a33d7f90b81049a430bd8fb87e2b3fa095391998769fa7d5f18caca0535e5d"} err="failed to get container status \"39a33d7f90b81049a430bd8fb87e2b3fa095391998769fa7d5f18caca0535e5d\": rpc error: code = NotFound desc = could not find container \"39a33d7f90b81049a430bd8fb87e2b3fa095391998769fa7d5f18caca0535e5d\": container with ID starting with 39a33d7f90b81049a430bd8fb87e2b3fa095391998769fa7d5f18caca0535e5d not found: ID does not exist" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.935908 4981 scope.go:117] "RemoveContainer" containerID="cf37390c0d281caea3ea0a3e13a5ad34f321561bf275a4b694e4c3e1b563144d" Feb 27 19:19:03 crc kubenswrapper[4981]: E0227 19:19:03.936160 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf37390c0d281caea3ea0a3e13a5ad34f321561bf275a4b694e4c3e1b563144d\": container with ID starting with cf37390c0d281caea3ea0a3e13a5ad34f321561bf275a4b694e4c3e1b563144d not found: ID does not exist" containerID="cf37390c0d281caea3ea0a3e13a5ad34f321561bf275a4b694e4c3e1b563144d" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.936201 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf37390c0d281caea3ea0a3e13a5ad34f321561bf275a4b694e4c3e1b563144d"} err="failed to get container status \"cf37390c0d281caea3ea0a3e13a5ad34f321561bf275a4b694e4c3e1b563144d\": rpc error: code = NotFound desc = could not find container \"cf37390c0d281caea3ea0a3e13a5ad34f321561bf275a4b694e4c3e1b563144d\": container with ID starting with cf37390c0d281caea3ea0a3e13a5ad34f321561bf275a4b694e4c3e1b563144d not found: ID does not exist" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.953078 4981 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caff730d-9210-4de9-b0f1-997e6f5f16c3-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"caff730d-9210-4de9-b0f1-997e6f5f16c3\") " pod="openstack/nova-cell1-conductor-0" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.953126 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pdlf\" (UniqueName: \"kubernetes.io/projected/caff730d-9210-4de9-b0f1-997e6f5f16c3-kube-api-access-5pdlf\") pod \"nova-cell1-conductor-0\" (UID: \"caff730d-9210-4de9-b0f1-997e6f5f16c3\") " pod="openstack/nova-cell1-conductor-0" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.953274 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caff730d-9210-4de9-b0f1-997e6f5f16c3-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"caff730d-9210-4de9-b0f1-997e6f5f16c3\") " pod="openstack/nova-cell1-conductor-0" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.953387 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f1f0012-8326-4378-9870-0da9e2128a42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:03 crc kubenswrapper[4981]: I0227 19:19:03.953407 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f1f0012-8326-4378-9870-0da9e2128a42-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.055367 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caff730d-9210-4de9-b0f1-997e6f5f16c3-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"caff730d-9210-4de9-b0f1-997e6f5f16c3\") " 
pod="openstack/nova-cell1-conductor-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.055421 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pdlf\" (UniqueName: \"kubernetes.io/projected/caff730d-9210-4de9-b0f1-997e6f5f16c3-kube-api-access-5pdlf\") pod \"nova-cell1-conductor-0\" (UID: \"caff730d-9210-4de9-b0f1-997e6f5f16c3\") " pod="openstack/nova-cell1-conductor-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.055533 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caff730d-9210-4de9-b0f1-997e6f5f16c3-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"caff730d-9210-4de9-b0f1-997e6f5f16c3\") " pod="openstack/nova-cell1-conductor-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.059616 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caff730d-9210-4de9-b0f1-997e6f5f16c3-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"caff730d-9210-4de9-b0f1-997e6f5f16c3\") " pod="openstack/nova-cell1-conductor-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.060147 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caff730d-9210-4de9-b0f1-997e6f5f16c3-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"caff730d-9210-4de9-b0f1-997e6f5f16c3\") " pod="openstack/nova-cell1-conductor-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.075144 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pdlf\" (UniqueName: \"kubernetes.io/projected/caff730d-9210-4de9-b0f1-997e6f5f16c3-kube-api-access-5pdlf\") pod \"nova-cell1-conductor-0\" (UID: \"caff730d-9210-4de9-b0f1-997e6f5f16c3\") " pod="openstack/nova-cell1-conductor-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.156705 4981 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.176195 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.186626 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.238530 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.257251 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.260378 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.260699 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.262250 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.292899 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.314941 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.364440 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-scripts\") pod \"ceilometer-0\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.364487 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.364507 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.364583 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d0dde97-c1b5-4662-91b4-1e38716a6412-run-httpd\") pod \"ceilometer-0\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.364715 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-config-data\") pod \"ceilometer-0\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.364773 4981 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqpwx\" (UniqueName: \"kubernetes.io/projected/2d0dde97-c1b5-4662-91b4-1e38716a6412-kube-api-access-wqpwx\") pod \"ceilometer-0\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.364801 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.364827 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d0dde97-c1b5-4662-91b4-1e38716a6412-log-httpd\") pod \"ceilometer-0\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.466455 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6ph4\" (UniqueName: \"kubernetes.io/projected/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-kube-api-access-f6ph4\") pod \"948b2b0a-6ee4-422a-8f8f-ba4271a94c61\" (UID: \"948b2b0a-6ee4-422a-8f8f-ba4271a94c61\") " Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.466505 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-nova-metadata-tls-certs\") pod \"948b2b0a-6ee4-422a-8f8f-ba4271a94c61\" (UID: \"948b2b0a-6ee4-422a-8f8f-ba4271a94c61\") " Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.467207 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-logs" (OuterVolumeSpecName: "logs") pod "948b2b0a-6ee4-422a-8f8f-ba4271a94c61" (UID: "948b2b0a-6ee4-422a-8f8f-ba4271a94c61"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.467516 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-logs\") pod \"948b2b0a-6ee4-422a-8f8f-ba4271a94c61\" (UID: \"948b2b0a-6ee4-422a-8f8f-ba4271a94c61\") " Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.467659 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-combined-ca-bundle\") pod \"948b2b0a-6ee4-422a-8f8f-ba4271a94c61\" (UID: \"948b2b0a-6ee4-422a-8f8f-ba4271a94c61\") " Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.467728 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-config-data\") pod \"948b2b0a-6ee4-422a-8f8f-ba4271a94c61\" (UID: \"948b2b0a-6ee4-422a-8f8f-ba4271a94c61\") " Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.468340 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-scripts\") pod \"ceilometer-0\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.468382 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " 
pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.468447 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.468532 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d0dde97-c1b5-4662-91b4-1e38716a6412-run-httpd\") pod \"ceilometer-0\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.468599 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-config-data\") pod \"ceilometer-0\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.468752 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqpwx\" (UniqueName: \"kubernetes.io/projected/2d0dde97-c1b5-4662-91b4-1e38716a6412-kube-api-access-wqpwx\") pod \"ceilometer-0\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.468795 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.468835 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/2d0dde97-c1b5-4662-91b4-1e38716a6412-log-httpd\") pod \"ceilometer-0\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.468951 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-logs\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.469406 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d0dde97-c1b5-4662-91b4-1e38716a6412-log-httpd\") pod \"ceilometer-0\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.471985 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-kube-api-access-f6ph4" (OuterVolumeSpecName: "kube-api-access-f6ph4") pod "948b2b0a-6ee4-422a-8f8f-ba4271a94c61" (UID: "948b2b0a-6ee4-422a-8f8f-ba4271a94c61"). InnerVolumeSpecName "kube-api-access-f6ph4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.472780 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.474899 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.479379 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d0dde97-c1b5-4662-91b4-1e38716a6412-run-httpd\") pod \"ceilometer-0\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.479690 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-scripts\") pod \"ceilometer-0\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.481327 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.485181 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-config-data\") pod \"ceilometer-0\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.485595 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqpwx\" (UniqueName: \"kubernetes.io/projected/2d0dde97-c1b5-4662-91b4-1e38716a6412-kube-api-access-wqpwx\") pod \"ceilometer-0\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.497162 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "948b2b0a-6ee4-422a-8f8f-ba4271a94c61" (UID: "948b2b0a-6ee4-422a-8f8f-ba4271a94c61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.498180 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-config-data" (OuterVolumeSpecName: "config-data") pod "948b2b0a-6ee4-422a-8f8f-ba4271a94c61" (UID: "948b2b0a-6ee4-422a-8f8f-ba4271a94c61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.511644 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "948b2b0a-6ee4-422a-8f8f-ba4271a94c61" (UID: "948b2b0a-6ee4-422a-8f8f-ba4271a94c61"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.570470 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6ph4\" (UniqueName: \"kubernetes.io/projected/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-kube-api-access-f6ph4\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.570504 4981 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.570516 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.570526 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948b2b0a-6ee4-422a-8f8f-ba4271a94c61-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.606921 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.682613 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 27 19:19:04 crc kubenswrapper[4981]: W0227 19:19:04.685606 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcaff730d_9210_4de9_b0f1_997e6f5f16c3.slice/crio-5be4e0ee3d2bf6a1d235466cbd5bd8554026f00794bf30f1e9af58276c3a0684 WatchSource:0}: Error finding container 5be4e0ee3d2bf6a1d235466cbd5bd8554026f00794bf30f1e9af58276c3a0684: Status 404 returned error can't find the container with id 5be4e0ee3d2bf6a1d235466cbd5bd8554026f00794bf30f1e9af58276c3a0684 Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.793126 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.793125 4981 generic.go:334] "Generic (PLEG): container finished" podID="948b2b0a-6ee4-422a-8f8f-ba4271a94c61" containerID="5495e6bfbdc4e38a6bec4b37576526e785cb825c8afc446efcd0265964e21b1d" exitCode=0 Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.793392 4981 generic.go:334] "Generic (PLEG): container finished" podID="948b2b0a-6ee4-422a-8f8f-ba4271a94c61" containerID="a298c72fbeb0637e031906fb511517a77777d7b86f4004d01b045a4172b13fc1" exitCode=143 Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.793152 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"948b2b0a-6ee4-422a-8f8f-ba4271a94c61","Type":"ContainerDied","Data":"5495e6bfbdc4e38a6bec4b37576526e785cb825c8afc446efcd0265964e21b1d"} Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.793512 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"948b2b0a-6ee4-422a-8f8f-ba4271a94c61","Type":"ContainerDied","Data":"a298c72fbeb0637e031906fb511517a77777d7b86f4004d01b045a4172b13fc1"} Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.793536 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"948b2b0a-6ee4-422a-8f8f-ba4271a94c61","Type":"ContainerDied","Data":"c758e6c1503c3e6aac7843a82fe888ed0594f4bbd79865f4499914193e254efe"} Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.793557 4981 scope.go:117] "RemoveContainer" containerID="5495e6bfbdc4e38a6bec4b37576526e785cb825c8afc446efcd0265964e21b1d" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.797188 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"caff730d-9210-4de9-b0f1-997e6f5f16c3","Type":"ContainerStarted","Data":"5be4e0ee3d2bf6a1d235466cbd5bd8554026f00794bf30f1e9af58276c3a0684"} Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.823874 4981 scope.go:117] "RemoveContainer" containerID="a298c72fbeb0637e031906fb511517a77777d7b86f4004d01b045a4172b13fc1" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.839375 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.864003 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.879208 4981 scope.go:117] "RemoveContainer" containerID="5495e6bfbdc4e38a6bec4b37576526e785cb825c8afc446efcd0265964e21b1d" Feb 27 19:19:04 crc kubenswrapper[4981]: E0227 19:19:04.880858 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5495e6bfbdc4e38a6bec4b37576526e785cb825c8afc446efcd0265964e21b1d\": container with ID starting with 5495e6bfbdc4e38a6bec4b37576526e785cb825c8afc446efcd0265964e21b1d not found: ID does not exist" 
containerID="5495e6bfbdc4e38a6bec4b37576526e785cb825c8afc446efcd0265964e21b1d" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.881562 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5495e6bfbdc4e38a6bec4b37576526e785cb825c8afc446efcd0265964e21b1d"} err="failed to get container status \"5495e6bfbdc4e38a6bec4b37576526e785cb825c8afc446efcd0265964e21b1d\": rpc error: code = NotFound desc = could not find container \"5495e6bfbdc4e38a6bec4b37576526e785cb825c8afc446efcd0265964e21b1d\": container with ID starting with 5495e6bfbdc4e38a6bec4b37576526e785cb825c8afc446efcd0265964e21b1d not found: ID does not exist" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.881685 4981 scope.go:117] "RemoveContainer" containerID="a298c72fbeb0637e031906fb511517a77777d7b86f4004d01b045a4172b13fc1" Feb 27 19:19:04 crc kubenswrapper[4981]: E0227 19:19:04.882375 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a298c72fbeb0637e031906fb511517a77777d7b86f4004d01b045a4172b13fc1\": container with ID starting with a298c72fbeb0637e031906fb511517a77777d7b86f4004d01b045a4172b13fc1 not found: ID does not exist" containerID="a298c72fbeb0637e031906fb511517a77777d7b86f4004d01b045a4172b13fc1" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.882410 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a298c72fbeb0637e031906fb511517a77777d7b86f4004d01b045a4172b13fc1"} err="failed to get container status \"a298c72fbeb0637e031906fb511517a77777d7b86f4004d01b045a4172b13fc1\": rpc error: code = NotFound desc = could not find container \"a298c72fbeb0637e031906fb511517a77777d7b86f4004d01b045a4172b13fc1\": container with ID starting with a298c72fbeb0637e031906fb511517a77777d7b86f4004d01b045a4172b13fc1 not found: ID does not exist" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.882431 4981 scope.go:117] 
"RemoveContainer" containerID="5495e6bfbdc4e38a6bec4b37576526e785cb825c8afc446efcd0265964e21b1d" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.882785 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5495e6bfbdc4e38a6bec4b37576526e785cb825c8afc446efcd0265964e21b1d"} err="failed to get container status \"5495e6bfbdc4e38a6bec4b37576526e785cb825c8afc446efcd0265964e21b1d\": rpc error: code = NotFound desc = could not find container \"5495e6bfbdc4e38a6bec4b37576526e785cb825c8afc446efcd0265964e21b1d\": container with ID starting with 5495e6bfbdc4e38a6bec4b37576526e785cb825c8afc446efcd0265964e21b1d not found: ID does not exist" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.882815 4981 scope.go:117] "RemoveContainer" containerID="a298c72fbeb0637e031906fb511517a77777d7b86f4004d01b045a4172b13fc1" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.883297 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a298c72fbeb0637e031906fb511517a77777d7b86f4004d01b045a4172b13fc1"} err="failed to get container status \"a298c72fbeb0637e031906fb511517a77777d7b86f4004d01b045a4172b13fc1\": rpc error: code = NotFound desc = could not find container \"a298c72fbeb0637e031906fb511517a77777d7b86f4004d01b045a4172b13fc1\": container with ID starting with a298c72fbeb0637e031906fb511517a77777d7b86f4004d01b045a4172b13fc1 not found: ID does not exist" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.886406 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 27 19:19:04 crc kubenswrapper[4981]: E0227 19:19:04.887181 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948b2b0a-6ee4-422a-8f8f-ba4271a94c61" containerName="nova-metadata-log" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.887203 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="948b2b0a-6ee4-422a-8f8f-ba4271a94c61" 
containerName="nova-metadata-log" Feb 27 19:19:04 crc kubenswrapper[4981]: E0227 19:19:04.887474 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948b2b0a-6ee4-422a-8f8f-ba4271a94c61" containerName="nova-metadata-metadata" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.887599 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="948b2b0a-6ee4-422a-8f8f-ba4271a94c61" containerName="nova-metadata-metadata" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.887903 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="948b2b0a-6ee4-422a-8f8f-ba4271a94c61" containerName="nova-metadata-log" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.887932 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="948b2b0a-6ee4-422a-8f8f-ba4271a94c61" containerName="nova-metadata-metadata" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.889348 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.891580 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.891829 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.908713 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.981149 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af9c3f90-c49d-4d3f-9d4a-567f5683434b-logs\") pod \"nova-metadata-0\" (UID: \"af9c3f90-c49d-4d3f-9d4a-567f5683434b\") " pod="openstack/nova-metadata-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.981201 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af9c3f90-c49d-4d3f-9d4a-567f5683434b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"af9c3f90-c49d-4d3f-9d4a-567f5683434b\") " pod="openstack/nova-metadata-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.981304 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/af9c3f90-c49d-4d3f-9d4a-567f5683434b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"af9c3f90-c49d-4d3f-9d4a-567f5683434b\") " pod="openstack/nova-metadata-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.981329 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2jrf\" (UniqueName: \"kubernetes.io/projected/af9c3f90-c49d-4d3f-9d4a-567f5683434b-kube-api-access-k2jrf\") pod \"nova-metadata-0\" (UID: \"af9c3f90-c49d-4d3f-9d4a-567f5683434b\") " pod="openstack/nova-metadata-0" Feb 27 19:19:04 crc kubenswrapper[4981]: I0227 19:19:04.981346 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af9c3f90-c49d-4d3f-9d4a-567f5683434b-config-data\") pod \"nova-metadata-0\" (UID: \"af9c3f90-c49d-4d3f-9d4a-567f5683434b\") " pod="openstack/nova-metadata-0" Feb 27 19:19:04 crc kubenswrapper[4981]: E0227 19:19:04.992574 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="041e2383b18a3d3d9bc0cb4291a7cc0041aaec7f99f62945dc6f15a8a8d352b1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 19:19:04 crc kubenswrapper[4981]: E0227 19:19:04.993959 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = 
Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="041e2383b18a3d3d9bc0cb4291a7cc0041aaec7f99f62945dc6f15a8a8d352b1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 19:19:04 crc kubenswrapper[4981]: E0227 19:19:04.995836 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="041e2383b18a3d3d9bc0cb4291a7cc0041aaec7f99f62945dc6f15a8a8d352b1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 19:19:04 crc kubenswrapper[4981]: E0227 19:19:04.995861 4981 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="770989f6-5783-4874-96fd-6fc1a6ea0757" containerName="nova-scheduler-scheduler" Feb 27 19:19:05 crc kubenswrapper[4981]: I0227 19:19:05.083718 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af9c3f90-c49d-4d3f-9d4a-567f5683434b-logs\") pod \"nova-metadata-0\" (UID: \"af9c3f90-c49d-4d3f-9d4a-567f5683434b\") " pod="openstack/nova-metadata-0" Feb 27 19:19:05 crc kubenswrapper[4981]: I0227 19:19:05.084077 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af9c3f90-c49d-4d3f-9d4a-567f5683434b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"af9c3f90-c49d-4d3f-9d4a-567f5683434b\") " pod="openstack/nova-metadata-0" Feb 27 19:19:05 crc kubenswrapper[4981]: I0227 19:19:05.084116 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/af9c3f90-c49d-4d3f-9d4a-567f5683434b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"af9c3f90-c49d-4d3f-9d4a-567f5683434b\") " pod="openstack/nova-metadata-0" Feb 27 19:19:05 crc kubenswrapper[4981]: I0227 19:19:05.084141 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2jrf\" (UniqueName: \"kubernetes.io/projected/af9c3f90-c49d-4d3f-9d4a-567f5683434b-kube-api-access-k2jrf\") pod \"nova-metadata-0\" (UID: \"af9c3f90-c49d-4d3f-9d4a-567f5683434b\") " pod="openstack/nova-metadata-0" Feb 27 19:19:05 crc kubenswrapper[4981]: I0227 19:19:05.084160 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af9c3f90-c49d-4d3f-9d4a-567f5683434b-config-data\") pod \"nova-metadata-0\" (UID: \"af9c3f90-c49d-4d3f-9d4a-567f5683434b\") " pod="openstack/nova-metadata-0" Feb 27 19:19:05 crc kubenswrapper[4981]: I0227 19:19:05.084329 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af9c3f90-c49d-4d3f-9d4a-567f5683434b-logs\") pod \"nova-metadata-0\" (UID: \"af9c3f90-c49d-4d3f-9d4a-567f5683434b\") " pod="openstack/nova-metadata-0" Feb 27 19:19:05 crc kubenswrapper[4981]: I0227 19:19:05.088395 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af9c3f90-c49d-4d3f-9d4a-567f5683434b-config-data\") pod \"nova-metadata-0\" (UID: \"af9c3f90-c49d-4d3f-9d4a-567f5683434b\") " pod="openstack/nova-metadata-0" Feb 27 19:19:05 crc kubenswrapper[4981]: I0227 19:19:05.094468 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/af9c3f90-c49d-4d3f-9d4a-567f5683434b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"af9c3f90-c49d-4d3f-9d4a-567f5683434b\") " pod="openstack/nova-metadata-0" Feb 27 19:19:05 crc 
kubenswrapper[4981]: I0227 19:19:05.095729 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af9c3f90-c49d-4d3f-9d4a-567f5683434b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"af9c3f90-c49d-4d3f-9d4a-567f5683434b\") " pod="openstack/nova-metadata-0" Feb 27 19:19:05 crc kubenswrapper[4981]: I0227 19:19:05.097327 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:19:05 crc kubenswrapper[4981]: W0227 19:19:05.099798 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d0dde97_c1b5_4662_91b4_1e38716a6412.slice/crio-ef5e00afe4841401505d2d44ce843d9dc5952e56d9efb63f89b0c0051bad198e WatchSource:0}: Error finding container ef5e00afe4841401505d2d44ce843d9dc5952e56d9efb63f89b0c0051bad198e: Status 404 returned error can't find the container with id ef5e00afe4841401505d2d44ce843d9dc5952e56d9efb63f89b0c0051bad198e Feb 27 19:19:05 crc kubenswrapper[4981]: I0227 19:19:05.107431 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2jrf\" (UniqueName: \"kubernetes.io/projected/af9c3f90-c49d-4d3f-9d4a-567f5683434b-kube-api-access-k2jrf\") pod \"nova-metadata-0\" (UID: \"af9c3f90-c49d-4d3f-9d4a-567f5683434b\") " pod="openstack/nova-metadata-0" Feb 27 19:19:05 crc kubenswrapper[4981]: I0227 19:19:05.212652 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 19:19:05 crc kubenswrapper[4981]: I0227 19:19:05.642635 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f1f0012-8326-4378-9870-0da9e2128a42" path="/var/lib/kubelet/pods/4f1f0012-8326-4378-9870-0da9e2128a42/volumes" Feb 27 19:19:05 crc kubenswrapper[4981]: I0227 19:19:05.644182 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="948b2b0a-6ee4-422a-8f8f-ba4271a94c61" path="/var/lib/kubelet/pods/948b2b0a-6ee4-422a-8f8f-ba4271a94c61/volumes" Feb 27 19:19:05 crc kubenswrapper[4981]: I0227 19:19:05.644881 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 19:19:05 crc kubenswrapper[4981]: I0227 19:19:05.812450 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af9c3f90-c49d-4d3f-9d4a-567f5683434b","Type":"ContainerStarted","Data":"166f3477f6451532c79a3cd47a6cb78dd4806dcbd36450c24fa17f776541affa"} Feb 27 19:19:05 crc kubenswrapper[4981]: I0227 19:19:05.812492 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af9c3f90-c49d-4d3f-9d4a-567f5683434b","Type":"ContainerStarted","Data":"adb8060916a0ec1de7fcdb56cf5058306bfcd73a65a3d647f566d71cc499dd7f"} Feb 27 19:19:05 crc kubenswrapper[4981]: I0227 19:19:05.813900 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"caff730d-9210-4de9-b0f1-997e6f5f16c3","Type":"ContainerStarted","Data":"a00132f0ac6dbee951194bcad710a6371433227c3b0775c31e258a5544d129d7"} Feb 27 19:19:05 crc kubenswrapper[4981]: I0227 19:19:05.825960 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d0dde97-c1b5-4662-91b4-1e38716a6412","Type":"ContainerStarted","Data":"ef5e00afe4841401505d2d44ce843d9dc5952e56d9efb63f89b0c0051bad198e"} Feb 27 19:19:05 crc kubenswrapper[4981]: I0227 19:19:05.839752 4981 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.839734202 podStartE2EDuration="2.839734202s" podCreationTimestamp="2026-02-27 19:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:19:05.838353369 +0000 UTC m=+2045.317134539" watchObservedRunningTime="2026-02-27 19:19:05.839734202 +0000 UTC m=+2045.318515362" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.634668 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.701602 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.722030 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/770989f6-5783-4874-96fd-6fc1a6ea0757-config-data\") pod \"770989f6-5783-4874-96fd-6fc1a6ea0757\" (UID: \"770989f6-5783-4874-96fd-6fc1a6ea0757\") " Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.722282 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/770989f6-5783-4874-96fd-6fc1a6ea0757-combined-ca-bundle\") pod \"770989f6-5783-4874-96fd-6fc1a6ea0757\" (UID: \"770989f6-5783-4874-96fd-6fc1a6ea0757\") " Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.722352 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62m66\" (UniqueName: \"kubernetes.io/projected/770989f6-5783-4874-96fd-6fc1a6ea0757-kube-api-access-62m66\") pod \"770989f6-5783-4874-96fd-6fc1a6ea0757\" (UID: \"770989f6-5783-4874-96fd-6fc1a6ea0757\") " Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.727570 4981 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/770989f6-5783-4874-96fd-6fc1a6ea0757-kube-api-access-62m66" (OuterVolumeSpecName: "kube-api-access-62m66") pod "770989f6-5783-4874-96fd-6fc1a6ea0757" (UID: "770989f6-5783-4874-96fd-6fc1a6ea0757"). InnerVolumeSpecName "kube-api-access-62m66". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.754244 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/770989f6-5783-4874-96fd-6fc1a6ea0757-config-data" (OuterVolumeSpecName: "config-data") pod "770989f6-5783-4874-96fd-6fc1a6ea0757" (UID: "770989f6-5783-4874-96fd-6fc1a6ea0757"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.768298 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/770989f6-5783-4874-96fd-6fc1a6ea0757-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "770989f6-5783-4874-96fd-6fc1a6ea0757" (UID: "770989f6-5783-4874-96fd-6fc1a6ea0757"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.823753 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8-config-data\") pod \"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8\" (UID: \"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8\") " Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.823827 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvkk5\" (UniqueName: \"kubernetes.io/projected/7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8-kube-api-access-dvkk5\") pod \"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8\" (UID: \"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8\") " Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.823957 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8-logs\") pod \"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8\" (UID: \"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8\") " Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.824049 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8-combined-ca-bundle\") pod \"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8\" (UID: \"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8\") " Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.824433 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/770989f6-5783-4874-96fd-6fc1a6ea0757-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.824446 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/770989f6-5783-4874-96fd-6fc1a6ea0757-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 
19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.824458 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62m66\" (UniqueName: \"kubernetes.io/projected/770989f6-5783-4874-96fd-6fc1a6ea0757-kube-api-access-62m66\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.826584 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8-logs" (OuterVolumeSpecName: "logs") pod "7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8" (UID: "7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.831565 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8-kube-api-access-dvkk5" (OuterVolumeSpecName: "kube-api-access-dvkk5") pod "7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8" (UID: "7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8"). InnerVolumeSpecName "kube-api-access-dvkk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.840855 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af9c3f90-c49d-4d3f-9d4a-567f5683434b","Type":"ContainerStarted","Data":"866b94f0fde1088c3ac2a9d2972b5bb9cc7b5fb09f1b0318f62b52074b51d020"} Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.842884 4981 generic.go:334] "Generic (PLEG): container finished" podID="770989f6-5783-4874-96fd-6fc1a6ea0757" containerID="041e2383b18a3d3d9bc0cb4291a7cc0041aaec7f99f62945dc6f15a8a8d352b1" exitCode=0 Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.842983 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"770989f6-5783-4874-96fd-6fc1a6ea0757","Type":"ContainerDied","Data":"041e2383b18a3d3d9bc0cb4291a7cc0041aaec7f99f62945dc6f15a8a8d352b1"} Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.843016 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"770989f6-5783-4874-96fd-6fc1a6ea0757","Type":"ContainerDied","Data":"ce24d85e01034d6e39f74928a6f1f76c3e28cf36d92d031c5acbc00207b868ca"} Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.843036 4981 scope.go:117] "RemoveContainer" containerID="041e2383b18a3d3d9bc0cb4291a7cc0041aaec7f99f62945dc6f15a8a8d352b1" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.843149 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.845793 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d0dde97-c1b5-4662-91b4-1e38716a6412","Type":"ContainerStarted","Data":"f58018452b6e710864b2e17f386b40e91b49f296d7eac85fbd3845c0a7df17ce"} Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.845823 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d0dde97-c1b5-4662-91b4-1e38716a6412","Type":"ContainerStarted","Data":"a225ab3ce6658f8dd34109d64fde380fbd739d1cd42267cc2cee90abb929549d"} Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.852743 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8-config-data" (OuterVolumeSpecName: "config-data") pod "7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8" (UID: "7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.853024 4981 generic.go:334] "Generic (PLEG): container finished" podID="7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8" containerID="015f9b1ec9b883b36119c3474e2c485e3b8c21b974226a67510d92408f36a1ce" exitCode=0 Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.853112 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.854810 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8","Type":"ContainerDied","Data":"015f9b1ec9b883b36119c3474e2c485e3b8c21b974226a67510d92408f36a1ce"} Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.856869 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8","Type":"ContainerDied","Data":"5b547332633a6c9e68d290ac09b2289fe58052dea800bef97c02e046048a0430"} Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.856910 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.869274 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8" (UID: "7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.878598 4981 scope.go:117] "RemoveContainer" containerID="041e2383b18a3d3d9bc0cb4291a7cc0041aaec7f99f62945dc6f15a8a8d352b1" Feb 27 19:19:06 crc kubenswrapper[4981]: E0227 19:19:06.879827 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"041e2383b18a3d3d9bc0cb4291a7cc0041aaec7f99f62945dc6f15a8a8d352b1\": container with ID starting with 041e2383b18a3d3d9bc0cb4291a7cc0041aaec7f99f62945dc6f15a8a8d352b1 not found: ID does not exist" containerID="041e2383b18a3d3d9bc0cb4291a7cc0041aaec7f99f62945dc6f15a8a8d352b1" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.879909 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"041e2383b18a3d3d9bc0cb4291a7cc0041aaec7f99f62945dc6f15a8a8d352b1"} err="failed to get container status \"041e2383b18a3d3d9bc0cb4291a7cc0041aaec7f99f62945dc6f15a8a8d352b1\": rpc error: code = NotFound desc = could not find container \"041e2383b18a3d3d9bc0cb4291a7cc0041aaec7f99f62945dc6f15a8a8d352b1\": container with ID starting with 041e2383b18a3d3d9bc0cb4291a7cc0041aaec7f99f62945dc6f15a8a8d352b1 not found: ID does not exist" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.879944 4981 scope.go:117] "RemoveContainer" containerID="015f9b1ec9b883b36119c3474e2c485e3b8c21b974226a67510d92408f36a1ce" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.884972 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.884952672 podStartE2EDuration="2.884952672s" podCreationTimestamp="2026-02-27 19:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:19:06.880413204 +0000 UTC m=+2046.359194374" watchObservedRunningTime="2026-02-27 19:19:06.884952672 
+0000 UTC m=+2046.363733832" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.906408 4981 scope.go:117] "RemoveContainer" containerID="d97f9378864dc341a7cf6aac96c7d09b250d90254a151f0c12b37f2f1de3fe5c" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.922565 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.926088 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.926124 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvkk5\" (UniqueName: \"kubernetes.io/projected/7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8-kube-api-access-dvkk5\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.926133 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8-logs\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.926142 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.937405 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.953146 4981 scope.go:117] "RemoveContainer" containerID="015f9b1ec9b883b36119c3474e2c485e3b8c21b974226a67510d92408f36a1ce" Feb 27 19:19:06 crc kubenswrapper[4981]: E0227 19:19:06.953609 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"015f9b1ec9b883b36119c3474e2c485e3b8c21b974226a67510d92408f36a1ce\": container with ID starting with 015f9b1ec9b883b36119c3474e2c485e3b8c21b974226a67510d92408f36a1ce not found: ID does not exist" containerID="015f9b1ec9b883b36119c3474e2c485e3b8c21b974226a67510d92408f36a1ce" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.953684 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"015f9b1ec9b883b36119c3474e2c485e3b8c21b974226a67510d92408f36a1ce"} err="failed to get container status \"015f9b1ec9b883b36119c3474e2c485e3b8c21b974226a67510d92408f36a1ce\": rpc error: code = NotFound desc = could not find container \"015f9b1ec9b883b36119c3474e2c485e3b8c21b974226a67510d92408f36a1ce\": container with ID starting with 015f9b1ec9b883b36119c3474e2c485e3b8c21b974226a67510d92408f36a1ce not found: ID does not exist" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.953714 4981 scope.go:117] "RemoveContainer" containerID="d97f9378864dc341a7cf6aac96c7d09b250d90254a151f0c12b37f2f1de3fe5c" Feb 27 19:19:06 crc kubenswrapper[4981]: E0227 19:19:06.954027 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d97f9378864dc341a7cf6aac96c7d09b250d90254a151f0c12b37f2f1de3fe5c\": container with ID starting with d97f9378864dc341a7cf6aac96c7d09b250d90254a151f0c12b37f2f1de3fe5c not found: ID does not exist" containerID="d97f9378864dc341a7cf6aac96c7d09b250d90254a151f0c12b37f2f1de3fe5c" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.954073 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d97f9378864dc341a7cf6aac96c7d09b250d90254a151f0c12b37f2f1de3fe5c"} err="failed to get container status \"d97f9378864dc341a7cf6aac96c7d09b250d90254a151f0c12b37f2f1de3fe5c\": rpc error: code = NotFound desc = could not find container \"d97f9378864dc341a7cf6aac96c7d09b250d90254a151f0c12b37f2f1de3fe5c\": container with ID 
starting with d97f9378864dc341a7cf6aac96c7d09b250d90254a151f0c12b37f2f1de3fe5c not found: ID does not exist" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.958560 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 19:19:06 crc kubenswrapper[4981]: E0227 19:19:06.959137 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8" containerName="nova-api-log" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.959162 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8" containerName="nova-api-log" Feb 27 19:19:06 crc kubenswrapper[4981]: E0227 19:19:06.959183 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8" containerName="nova-api-api" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.959191 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8" containerName="nova-api-api" Feb 27 19:19:06 crc kubenswrapper[4981]: E0227 19:19:06.959208 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="770989f6-5783-4874-96fd-6fc1a6ea0757" containerName="nova-scheduler-scheduler" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.959217 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="770989f6-5783-4874-96fd-6fc1a6ea0757" containerName="nova-scheduler-scheduler" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.959445 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8" containerName="nova-api-log" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.959476 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="770989f6-5783-4874-96fd-6fc1a6ea0757" containerName="nova-scheduler-scheduler" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.959496 4981 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8" containerName="nova-api-api" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.960331 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.969524 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 27 19:19:06 crc kubenswrapper[4981]: I0227 19:19:06.982889 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.027591 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq25q\" (UniqueName: \"kubernetes.io/projected/47c21807-0372-41ce-a60d-021a45429037-kube-api-access-dq25q\") pod \"nova-scheduler-0\" (UID: \"47c21807-0372-41ce-a60d-021a45429037\") " pod="openstack/nova-scheduler-0" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.027898 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47c21807-0372-41ce-a60d-021a45429037-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"47c21807-0372-41ce-a60d-021a45429037\") " pod="openstack/nova-scheduler-0" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.028268 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47c21807-0372-41ce-a60d-021a45429037-config-data\") pod \"nova-scheduler-0\" (UID: \"47c21807-0372-41ce-a60d-021a45429037\") " pod="openstack/nova-scheduler-0" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.129737 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47c21807-0372-41ce-a60d-021a45429037-config-data\") pod 
\"nova-scheduler-0\" (UID: \"47c21807-0372-41ce-a60d-021a45429037\") " pod="openstack/nova-scheduler-0" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.129832 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq25q\" (UniqueName: \"kubernetes.io/projected/47c21807-0372-41ce-a60d-021a45429037-kube-api-access-dq25q\") pod \"nova-scheduler-0\" (UID: \"47c21807-0372-41ce-a60d-021a45429037\") " pod="openstack/nova-scheduler-0" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.129906 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47c21807-0372-41ce-a60d-021a45429037-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"47c21807-0372-41ce-a60d-021a45429037\") " pod="openstack/nova-scheduler-0" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.133886 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47c21807-0372-41ce-a60d-021a45429037-config-data\") pod \"nova-scheduler-0\" (UID: \"47c21807-0372-41ce-a60d-021a45429037\") " pod="openstack/nova-scheduler-0" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.134610 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47c21807-0372-41ce-a60d-021a45429037-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"47c21807-0372-41ce-a60d-021a45429037\") " pod="openstack/nova-scheduler-0" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.172130 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq25q\" (UniqueName: \"kubernetes.io/projected/47c21807-0372-41ce-a60d-021a45429037-kube-api-access-dq25q\") pod \"nova-scheduler-0\" (UID: \"47c21807-0372-41ce-a60d-021a45429037\") " pod="openstack/nova-scheduler-0" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.327313 
4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.334090 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.365419 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.367424 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.368431 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.381573 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.390799 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.443353 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd926f78-827a-4b07-9f2e-6e5cc597503b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd926f78-827a-4b07-9f2e-6e5cc597503b\") " pod="openstack/nova-api-0" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.443704 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd926f78-827a-4b07-9f2e-6e5cc597503b-logs\") pod \"nova-api-0\" (UID: \"fd926f78-827a-4b07-9f2e-6e5cc597503b\") " pod="openstack/nova-api-0" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.443804 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fd926f78-827a-4b07-9f2e-6e5cc597503b-config-data\") pod \"nova-api-0\" (UID: \"fd926f78-827a-4b07-9f2e-6e5cc597503b\") " pod="openstack/nova-api-0" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.443857 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x77np\" (UniqueName: \"kubernetes.io/projected/fd926f78-827a-4b07-9f2e-6e5cc597503b-kube-api-access-x77np\") pod \"nova-api-0\" (UID: \"fd926f78-827a-4b07-9f2e-6e5cc597503b\") " pod="openstack/nova-api-0" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.548302 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x77np\" (UniqueName: \"kubernetes.io/projected/fd926f78-827a-4b07-9f2e-6e5cc597503b-kube-api-access-x77np\") pod \"nova-api-0\" (UID: \"fd926f78-827a-4b07-9f2e-6e5cc597503b\") " pod="openstack/nova-api-0" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.549154 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd926f78-827a-4b07-9f2e-6e5cc597503b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd926f78-827a-4b07-9f2e-6e5cc597503b\") " pod="openstack/nova-api-0" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.549258 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd926f78-827a-4b07-9f2e-6e5cc597503b-logs\") pod \"nova-api-0\" (UID: \"fd926f78-827a-4b07-9f2e-6e5cc597503b\") " pod="openstack/nova-api-0" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.549491 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd926f78-827a-4b07-9f2e-6e5cc597503b-config-data\") pod \"nova-api-0\" (UID: \"fd926f78-827a-4b07-9f2e-6e5cc597503b\") " pod="openstack/nova-api-0" Feb 27 19:19:07 crc kubenswrapper[4981]: 
I0227 19:19:07.549715 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd926f78-827a-4b07-9f2e-6e5cc597503b-logs\") pod \"nova-api-0\" (UID: \"fd926f78-827a-4b07-9f2e-6e5cc597503b\") " pod="openstack/nova-api-0" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.557973 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd926f78-827a-4b07-9f2e-6e5cc597503b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd926f78-827a-4b07-9f2e-6e5cc597503b\") " pod="openstack/nova-api-0" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.558387 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd926f78-827a-4b07-9f2e-6e5cc597503b-config-data\") pod \"nova-api-0\" (UID: \"fd926f78-827a-4b07-9f2e-6e5cc597503b\") " pod="openstack/nova-api-0" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.567694 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x77np\" (UniqueName: \"kubernetes.io/projected/fd926f78-827a-4b07-9f2e-6e5cc597503b-kube-api-access-x77np\") pod \"nova-api-0\" (UID: \"fd926f78-827a-4b07-9f2e-6e5cc597503b\") " pod="openstack/nova-api-0" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.648741 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="770989f6-5783-4874-96fd-6fc1a6ea0757" path="/var/lib/kubelet/pods/770989f6-5783-4874-96fd-6fc1a6ea0757/volumes" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.649933 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8" path="/var/lib/kubelet/pods/7bfee6e4-4ff1-42fb-b40d-9d4f6d046dd8/volumes" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.712503 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.895245 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.906313 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d0dde97-c1b5-4662-91b4-1e38716a6412","Type":"ContainerStarted","Data":"5aecbc124e1cac3b4fc4c7f18fe0cea4f4e75ce645fa5494353d953050a5d203"} Feb 27 19:19:07 crc kubenswrapper[4981]: I0227 19:19:07.972485 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 19:19:08 crc kubenswrapper[4981]: I0227 19:19:08.916411 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"47c21807-0372-41ce-a60d-021a45429037","Type":"ContainerStarted","Data":"519d973b88c6da9ab8e2526937da89c47d2d8f93fded9a976c2ef5e0a0606a4c"} Feb 27 19:19:08 crc kubenswrapper[4981]: I0227 19:19:08.916860 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"47c21807-0372-41ce-a60d-021a45429037","Type":"ContainerStarted","Data":"6f6c388e80a419e4222cbaabcaaf794744bfd1eb56f347c2b5114525851d67d5"} Feb 27 19:19:08 crc kubenswrapper[4981]: I0227 19:19:08.917501 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 27 19:19:08 crc kubenswrapper[4981]: I0227 19:19:08.919466 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd926f78-827a-4b07-9f2e-6e5cc597503b","Type":"ContainerStarted","Data":"f3044abc7508bfb6556f45d024f34839b9771d12f1a7bfc877b55d882bcef680"} Feb 27 19:19:08 crc kubenswrapper[4981]: I0227 19:19:08.919496 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"fd926f78-827a-4b07-9f2e-6e5cc597503b","Type":"ContainerStarted","Data":"4b50cc042b7d09d497be5894e1f900b6ab7ff7355a1e2a418b0d54026934f618"} Feb 27 19:19:08 crc kubenswrapper[4981]: I0227 19:19:08.919506 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd926f78-827a-4b07-9f2e-6e5cc597503b","Type":"ContainerStarted","Data":"c2f99eff485df665f94f3e4a380b09686d9a1c04b988b76f49964cc62be3076e"} Feb 27 19:19:08 crc kubenswrapper[4981]: I0227 19:19:08.950231 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.950212973 podStartE2EDuration="2.950212973s" podCreationTimestamp="2026-02-27 19:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:19:08.937567304 +0000 UTC m=+2048.416348464" watchObservedRunningTime="2026-02-27 19:19:08.950212973 +0000 UTC m=+2048.428994133" Feb 27 19:19:08 crc kubenswrapper[4981]: I0227 19:19:08.989990 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.989938821 podStartE2EDuration="1.989938821s" podCreationTimestamp="2026-02-27 19:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:19:08.977322514 +0000 UTC m=+2048.456103694" watchObservedRunningTime="2026-02-27 19:19:08.989938821 +0000 UTC m=+2048.468719991" Feb 27 19:19:09 crc kubenswrapper[4981]: I0227 19:19:09.184933 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 27 19:19:10 crc kubenswrapper[4981]: I0227 19:19:10.212923 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 19:19:10 crc kubenswrapper[4981]: I0227 19:19:10.213539 4981 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 19:19:10 crc kubenswrapper[4981]: I0227 19:19:10.941140 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d0dde97-c1b5-4662-91b4-1e38716a6412","Type":"ContainerStarted","Data":"8004d45b1162d9ac83ee63f063165452a0370c2bc04390acf4a21e2c0223ca31"} Feb 27 19:19:10 crc kubenswrapper[4981]: I0227 19:19:10.941532 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 19:19:10 crc kubenswrapper[4981]: I0227 19:19:10.974457 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.399784701 podStartE2EDuration="6.974433743s" podCreationTimestamp="2026-02-27 19:19:04 +0000 UTC" firstStartedPulling="2026-02-27 19:19:05.102430986 +0000 UTC m=+2044.581212146" lastFinishedPulling="2026-02-27 19:19:09.677080028 +0000 UTC m=+2049.155861188" observedRunningTime="2026-02-27 19:19:10.964460227 +0000 UTC m=+2050.443241387" watchObservedRunningTime="2026-02-27 19:19:10.974433743 +0000 UTC m=+2050.453214903" Feb 27 19:19:12 crc kubenswrapper[4981]: I0227 19:19:12.369980 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 27 19:19:15 crc kubenswrapper[4981]: I0227 19:19:15.213237 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 27 19:19:15 crc kubenswrapper[4981]: I0227 19:19:15.213278 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 27 19:19:16 crc kubenswrapper[4981]: I0227 19:19:16.228230 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="af9c3f90-c49d-4d3f-9d4a-567f5683434b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Feb 27 19:19:16 crc kubenswrapper[4981]: I0227 19:19:16.228239 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="af9c3f90-c49d-4d3f-9d4a-567f5683434b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 19:19:17 crc kubenswrapper[4981]: I0227 19:19:17.369733 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 27 19:19:17 crc kubenswrapper[4981]: I0227 19:19:17.401682 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 27 19:19:17 crc kubenswrapper[4981]: I0227 19:19:17.713831 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 19:19:17 crc kubenswrapper[4981]: I0227 19:19:17.714306 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 19:19:18 crc kubenswrapper[4981]: I0227 19:19:18.063630 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 27 19:19:18 crc kubenswrapper[4981]: I0227 19:19:18.797312 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fd926f78-827a-4b07-9f2e-6e5cc597503b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.216:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 19:19:18 crc kubenswrapper[4981]: I0227 19:19:18.797547 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fd926f78-827a-4b07-9f2e-6e5cc597503b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.216:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 19:19:19 crc 
kubenswrapper[4981]: I0227 19:19:19.850864 4981 scope.go:117] "RemoveContainer" containerID="61a369867fdb53ff481bf9597a82f22b9db98dc118f4b7dac2daa323d445e360" Feb 27 19:19:24 crc kubenswrapper[4981]: I0227 19:19:24.792812 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:19:24 crc kubenswrapper[4981]: I0227 19:19:24.845176 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/292840f2-f0b1-4bcd-9787-225d6c1a3e51-combined-ca-bundle\") pod \"292840f2-f0b1-4bcd-9787-225d6c1a3e51\" (UID: \"292840f2-f0b1-4bcd-9787-225d6c1a3e51\") " Feb 27 19:19:24 crc kubenswrapper[4981]: I0227 19:19:24.845294 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbv7z\" (UniqueName: \"kubernetes.io/projected/292840f2-f0b1-4bcd-9787-225d6c1a3e51-kube-api-access-qbv7z\") pod \"292840f2-f0b1-4bcd-9787-225d6c1a3e51\" (UID: \"292840f2-f0b1-4bcd-9787-225d6c1a3e51\") " Feb 27 19:19:24 crc kubenswrapper[4981]: I0227 19:19:24.845410 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/292840f2-f0b1-4bcd-9787-225d6c1a3e51-config-data\") pod \"292840f2-f0b1-4bcd-9787-225d6c1a3e51\" (UID: \"292840f2-f0b1-4bcd-9787-225d6c1a3e51\") " Feb 27 19:19:24 crc kubenswrapper[4981]: I0227 19:19:24.850900 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/292840f2-f0b1-4bcd-9787-225d6c1a3e51-kube-api-access-qbv7z" (OuterVolumeSpecName: "kube-api-access-qbv7z") pod "292840f2-f0b1-4bcd-9787-225d6c1a3e51" (UID: "292840f2-f0b1-4bcd-9787-225d6c1a3e51"). InnerVolumeSpecName "kube-api-access-qbv7z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:19:24 crc kubenswrapper[4981]: I0227 19:19:24.871783 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/292840f2-f0b1-4bcd-9787-225d6c1a3e51-config-data" (OuterVolumeSpecName: "config-data") pod "292840f2-f0b1-4bcd-9787-225d6c1a3e51" (UID: "292840f2-f0b1-4bcd-9787-225d6c1a3e51"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:24 crc kubenswrapper[4981]: I0227 19:19:24.876102 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/292840f2-f0b1-4bcd-9787-225d6c1a3e51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "292840f2-f0b1-4bcd-9787-225d6c1a3e51" (UID: "292840f2-f0b1-4bcd-9787-225d6c1a3e51"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:24 crc kubenswrapper[4981]: I0227 19:19:24.949204 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/292840f2-f0b1-4bcd-9787-225d6c1a3e51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:24 crc kubenswrapper[4981]: I0227 19:19:24.949253 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbv7z\" (UniqueName: \"kubernetes.io/projected/292840f2-f0b1-4bcd-9787-225d6c1a3e51-kube-api-access-qbv7z\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:24 crc kubenswrapper[4981]: I0227 19:19:24.949271 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/292840f2-f0b1-4bcd-9787-225d6c1a3e51-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.117677 4981 generic.go:334] "Generic (PLEG): container finished" podID="292840f2-f0b1-4bcd-9787-225d6c1a3e51" containerID="d4c8b5d08d228e7a4379977b7273e89ea61c13394c3789bba9da06a9a55ff18e" 
exitCode=137 Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.117755 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.117760 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"292840f2-f0b1-4bcd-9787-225d6c1a3e51","Type":"ContainerDied","Data":"d4c8b5d08d228e7a4379977b7273e89ea61c13394c3789bba9da06a9a55ff18e"} Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.118341 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"292840f2-f0b1-4bcd-9787-225d6c1a3e51","Type":"ContainerDied","Data":"2bef6d1e4227fa1dc177c4db85247d2daf0e2083e8102af7ac03a9e51fa051e8"} Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.118377 4981 scope.go:117] "RemoveContainer" containerID="d4c8b5d08d228e7a4379977b7273e89ea61c13394c3789bba9da06a9a55ff18e" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.152355 4981 scope.go:117] "RemoveContainer" containerID="d4c8b5d08d228e7a4379977b7273e89ea61c13394c3789bba9da06a9a55ff18e" Feb 27 19:19:25 crc kubenswrapper[4981]: E0227 19:19:25.152735 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4c8b5d08d228e7a4379977b7273e89ea61c13394c3789bba9da06a9a55ff18e\": container with ID starting with d4c8b5d08d228e7a4379977b7273e89ea61c13394c3789bba9da06a9a55ff18e not found: ID does not exist" containerID="d4c8b5d08d228e7a4379977b7273e89ea61c13394c3789bba9da06a9a55ff18e" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.152772 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4c8b5d08d228e7a4379977b7273e89ea61c13394c3789bba9da06a9a55ff18e"} err="failed to get container status \"d4c8b5d08d228e7a4379977b7273e89ea61c13394c3789bba9da06a9a55ff18e\": rpc error: code = NotFound 
desc = could not find container \"d4c8b5d08d228e7a4379977b7273e89ea61c13394c3789bba9da06a9a55ff18e\": container with ID starting with d4c8b5d08d228e7a4379977b7273e89ea61c13394c3789bba9da06a9a55ff18e not found: ID does not exist" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.178770 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.196089 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.204860 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 19:19:25 crc kubenswrapper[4981]: E0227 19:19:25.205478 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="292840f2-f0b1-4bcd-9787-225d6c1a3e51" containerName="nova-cell1-novncproxy-novncproxy" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.205578 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="292840f2-f0b1-4bcd-9787-225d6c1a3e51" containerName="nova-cell1-novncproxy-novncproxy" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.205825 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="292840f2-f0b1-4bcd-9787-225d6c1a3e51" containerName="nova-cell1-novncproxy-novncproxy" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.206590 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.210651 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.210792 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.211015 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.218616 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.223513 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.224462 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.229044 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.255622 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e27e8aa-f220-4415-8670-ca9186161dba\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.255737 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzpvk\" (UniqueName: \"kubernetes.io/projected/4e27e8aa-f220-4415-8670-ca9186161dba-kube-api-access-fzpvk\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"4e27e8aa-f220-4415-8670-ca9186161dba\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.255829 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e27e8aa-f220-4415-8670-ca9186161dba\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.255907 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e27e8aa-f220-4415-8670-ca9186161dba\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.256022 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e27e8aa-f220-4415-8670-ca9186161dba\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.358479 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e27e8aa-f220-4415-8670-ca9186161dba\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.358562 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"4e27e8aa-f220-4415-8670-ca9186161dba\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.358638 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e27e8aa-f220-4415-8670-ca9186161dba\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.358674 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e27e8aa-f220-4415-8670-ca9186161dba\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.358714 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzpvk\" (UniqueName: \"kubernetes.io/projected/4e27e8aa-f220-4415-8670-ca9186161dba-kube-api-access-fzpvk\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e27e8aa-f220-4415-8670-ca9186161dba\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.362575 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e27e8aa-f220-4415-8670-ca9186161dba\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.362614 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e27e8aa-f220-4415-8670-ca9186161dba\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:19:25 
crc kubenswrapper[4981]: I0227 19:19:25.363050 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e27e8aa-f220-4415-8670-ca9186161dba\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.363182 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e27e8aa-f220-4415-8670-ca9186161dba\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.374256 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzpvk\" (UniqueName: \"kubernetes.io/projected/4e27e8aa-f220-4415-8670-ca9186161dba-kube-api-access-fzpvk\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e27e8aa-f220-4415-8670-ca9186161dba\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.530964 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.647309 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="292840f2-f0b1-4bcd-9787-225d6c1a3e51" path="/var/lib/kubelet/pods/292840f2-f0b1-4bcd-9787-225d6c1a3e51/volumes" Feb 27 19:19:25 crc kubenswrapper[4981]: I0227 19:19:25.980682 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 19:19:26 crc kubenswrapper[4981]: I0227 19:19:26.130252 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4e27e8aa-f220-4415-8670-ca9186161dba","Type":"ContainerStarted","Data":"313deef2c5a8e6f01649d281da41096877fef8abf38e0f04b1d9f6318b78f7f9"} Feb 27 19:19:26 crc kubenswrapper[4981]: I0227 19:19:26.138721 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 27 19:19:27 crc kubenswrapper[4981]: I0227 19:19:27.142252 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4e27e8aa-f220-4415-8670-ca9186161dba","Type":"ContainerStarted","Data":"6c1a95f2a0729962517d2e152f52fd832e734caecb67ecd85b30fb4674656560"} Feb 27 19:19:27 crc kubenswrapper[4981]: I0227 19:19:27.160770 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.160749808 podStartE2EDuration="2.160749808s" podCreationTimestamp="2026-02-27 19:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:19:27.159284274 +0000 UTC m=+2066.638065434" watchObservedRunningTime="2026-02-27 19:19:27.160749808 +0000 UTC m=+2066.639530968" Feb 27 19:19:27 crc kubenswrapper[4981]: I0227 19:19:27.718110 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 27 
19:19:27 crc kubenswrapper[4981]: I0227 19:19:27.718655 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 27 19:19:27 crc kubenswrapper[4981]: I0227 19:19:27.719843 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 27 19:19:27 crc kubenswrapper[4981]: I0227 19:19:27.721375 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 27 19:19:28 crc kubenswrapper[4981]: I0227 19:19:28.153547 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 27 19:19:28 crc kubenswrapper[4981]: I0227 19:19:28.156673 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 27 19:19:28 crc kubenswrapper[4981]: I0227 19:19:28.379023 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-r2zw4"] Feb 27 19:19:28 crc kubenswrapper[4981]: I0227 19:19:28.381128 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" Feb 27 19:19:28 crc kubenswrapper[4981]: I0227 19:19:28.399896 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-r2zw4"] Feb 27 19:19:28 crc kubenswrapper[4981]: I0227 19:19:28.414746 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-r2zw4\" (UID: \"e719b057-15c7-4204-9cbc-665f6653011f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" Feb 27 19:19:28 crc kubenswrapper[4981]: I0227 19:19:28.414891 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-r2zw4\" (UID: \"e719b057-15c7-4204-9cbc-665f6653011f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" Feb 27 19:19:28 crc kubenswrapper[4981]: I0227 19:19:28.414990 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-r2zw4\" (UID: \"e719b057-15c7-4204-9cbc-665f6653011f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" Feb 27 19:19:28 crc kubenswrapper[4981]: I0227 19:19:28.415251 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-config\") pod \"dnsmasq-dns-89c5cd4d5-r2zw4\" (UID: \"e719b057-15c7-4204-9cbc-665f6653011f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" Feb 27 19:19:28 crc kubenswrapper[4981]: I0227 19:19:28.415330 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7vbxt\" (UniqueName: \"kubernetes.io/projected/e719b057-15c7-4204-9cbc-665f6653011f-kube-api-access-7vbxt\") pod \"dnsmasq-dns-89c5cd4d5-r2zw4\" (UID: \"e719b057-15c7-4204-9cbc-665f6653011f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" Feb 27 19:19:28 crc kubenswrapper[4981]: I0227 19:19:28.415539 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-r2zw4\" (UID: \"e719b057-15c7-4204-9cbc-665f6653011f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" Feb 27 19:19:28 crc kubenswrapper[4981]: I0227 19:19:28.517367 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-r2zw4\" (UID: \"e719b057-15c7-4204-9cbc-665f6653011f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" Feb 27 19:19:28 crc kubenswrapper[4981]: I0227 19:19:28.517419 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-r2zw4\" (UID: \"e719b057-15c7-4204-9cbc-665f6653011f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" Feb 27 19:19:28 crc kubenswrapper[4981]: I0227 19:19:28.517504 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-config\") pod \"dnsmasq-dns-89c5cd4d5-r2zw4\" (UID: \"e719b057-15c7-4204-9cbc-665f6653011f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" Feb 27 19:19:28 crc kubenswrapper[4981]: I0227 19:19:28.517542 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vbxt\" 
(UniqueName: \"kubernetes.io/projected/e719b057-15c7-4204-9cbc-665f6653011f-kube-api-access-7vbxt\") pod \"dnsmasq-dns-89c5cd4d5-r2zw4\" (UID: \"e719b057-15c7-4204-9cbc-665f6653011f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" Feb 27 19:19:28 crc kubenswrapper[4981]: I0227 19:19:28.517615 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-r2zw4\" (UID: \"e719b057-15c7-4204-9cbc-665f6653011f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" Feb 27 19:19:28 crc kubenswrapper[4981]: I0227 19:19:28.517677 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-r2zw4\" (UID: \"e719b057-15c7-4204-9cbc-665f6653011f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" Feb 27 19:19:28 crc kubenswrapper[4981]: I0227 19:19:28.518934 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-r2zw4\" (UID: \"e719b057-15c7-4204-9cbc-665f6653011f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" Feb 27 19:19:28 crc kubenswrapper[4981]: I0227 19:19:28.518934 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-config\") pod \"dnsmasq-dns-89c5cd4d5-r2zw4\" (UID: \"e719b057-15c7-4204-9cbc-665f6653011f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" Feb 27 19:19:28 crc kubenswrapper[4981]: I0227 19:19:28.518973 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-dns-swift-storage-0\") pod 
\"dnsmasq-dns-89c5cd4d5-r2zw4\" (UID: \"e719b057-15c7-4204-9cbc-665f6653011f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" Feb 27 19:19:28 crc kubenswrapper[4981]: I0227 19:19:28.518982 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-r2zw4\" (UID: \"e719b057-15c7-4204-9cbc-665f6653011f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" Feb 27 19:19:28 crc kubenswrapper[4981]: I0227 19:19:28.519268 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-r2zw4\" (UID: \"e719b057-15c7-4204-9cbc-665f6653011f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" Feb 27 19:19:28 crc kubenswrapper[4981]: I0227 19:19:28.537468 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vbxt\" (UniqueName: \"kubernetes.io/projected/e719b057-15c7-4204-9cbc-665f6653011f-kube-api-access-7vbxt\") pod \"dnsmasq-dns-89c5cd4d5-r2zw4\" (UID: \"e719b057-15c7-4204-9cbc-665f6653011f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" Feb 27 19:19:28 crc kubenswrapper[4981]: I0227 19:19:28.708220 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" Feb 27 19:19:29 crc kubenswrapper[4981]: I0227 19:19:29.213574 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-r2zw4"] Feb 27 19:19:30 crc kubenswrapper[4981]: I0227 19:19:30.171514 4981 generic.go:334] "Generic (PLEG): container finished" podID="e719b057-15c7-4204-9cbc-665f6653011f" containerID="4c58a0a538d2f90812bdf3348eeccd9e9b536d604d550f56ce5e709e4e2e2a00" exitCode=0 Feb 27 19:19:30 crc kubenswrapper[4981]: I0227 19:19:30.171670 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" event={"ID":"e719b057-15c7-4204-9cbc-665f6653011f","Type":"ContainerDied","Data":"4c58a0a538d2f90812bdf3348eeccd9e9b536d604d550f56ce5e709e4e2e2a00"} Feb 27 19:19:30 crc kubenswrapper[4981]: I0227 19:19:30.172244 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" event={"ID":"e719b057-15c7-4204-9cbc-665f6653011f","Type":"ContainerStarted","Data":"568fd33f85655df09f1ea4a11de44487aba76a40653306981d811cec64ccaf45"} Feb 27 19:19:30 crc kubenswrapper[4981]: I0227 19:19:30.531696 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:19:30 crc kubenswrapper[4981]: I0227 19:19:30.922079 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 19:19:31 crc kubenswrapper[4981]: I0227 19:19:31.185206 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" event={"ID":"e719b057-15c7-4204-9cbc-665f6653011f","Type":"ContainerStarted","Data":"98584a232e3fef55da5240ff567aead3a2ca1595c80c0f7568768a774b5bbf94"} Feb 27 19:19:31 crc kubenswrapper[4981]: I0227 19:19:31.185323 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fd926f78-827a-4b07-9f2e-6e5cc597503b" containerName="nova-api-log" 
containerID="cri-o://4b50cc042b7d09d497be5894e1f900b6ab7ff7355a1e2a418b0d54026934f618" gracePeriod=30 Feb 27 19:19:31 crc kubenswrapper[4981]: I0227 19:19:31.185366 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fd926f78-827a-4b07-9f2e-6e5cc597503b" containerName="nova-api-api" containerID="cri-o://f3044abc7508bfb6556f45d024f34839b9771d12f1a7bfc877b55d882bcef680" gracePeriod=30 Feb 27 19:19:31 crc kubenswrapper[4981]: I0227 19:19:31.218393 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" podStartSLOduration=3.218371131 podStartE2EDuration="3.218371131s" podCreationTimestamp="2026-02-27 19:19:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:19:31.216520604 +0000 UTC m=+2070.695301774" watchObservedRunningTime="2026-02-27 19:19:31.218371131 +0000 UTC m=+2070.697152301" Feb 27 19:19:31 crc kubenswrapper[4981]: I0227 19:19:31.944935 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:19:31 crc kubenswrapper[4981]: I0227 19:19:31.945680 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d0dde97-c1b5-4662-91b4-1e38716a6412" containerName="ceilometer-central-agent" containerID="cri-o://a225ab3ce6658f8dd34109d64fde380fbd739d1cd42267cc2cee90abb929549d" gracePeriod=30 Feb 27 19:19:31 crc kubenswrapper[4981]: I0227 19:19:31.945775 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d0dde97-c1b5-4662-91b4-1e38716a6412" containerName="sg-core" containerID="cri-o://5aecbc124e1cac3b4fc4c7f18fe0cea4f4e75ce645fa5494353d953050a5d203" gracePeriod=30 Feb 27 19:19:31 crc kubenswrapper[4981]: I0227 19:19:31.945816 4981 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="2d0dde97-c1b5-4662-91b4-1e38716a6412" containerName="ceilometer-notification-agent" containerID="cri-o://f58018452b6e710864b2e17f386b40e91b49f296d7eac85fbd3845c0a7df17ce" gracePeriod=30 Feb 27 19:19:31 crc kubenswrapper[4981]: I0227 19:19:31.945932 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d0dde97-c1b5-4662-91b4-1e38716a6412" containerName="proxy-httpd" containerID="cri-o://8004d45b1162d9ac83ee63f063165452a0370c2bc04390acf4a21e2c0223ca31" gracePeriod=30 Feb 27 19:19:31 crc kubenswrapper[4981]: I0227 19:19:31.960995 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="2d0dde97-c1b5-4662-91b4-1e38716a6412" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.213:3000/\": read tcp 10.217.0.2:60854->10.217.0.213:3000: read: connection reset by peer" Feb 27 19:19:32 crc kubenswrapper[4981]: I0227 19:19:32.197919 4981 generic.go:334] "Generic (PLEG): container finished" podID="2d0dde97-c1b5-4662-91b4-1e38716a6412" containerID="8004d45b1162d9ac83ee63f063165452a0370c2bc04390acf4a21e2c0223ca31" exitCode=0 Feb 27 19:19:32 crc kubenswrapper[4981]: I0227 19:19:32.197956 4981 generic.go:334] "Generic (PLEG): container finished" podID="2d0dde97-c1b5-4662-91b4-1e38716a6412" containerID="5aecbc124e1cac3b4fc4c7f18fe0cea4f4e75ce645fa5494353d953050a5d203" exitCode=2 Feb 27 19:19:32 crc kubenswrapper[4981]: I0227 19:19:32.198077 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d0dde97-c1b5-4662-91b4-1e38716a6412","Type":"ContainerDied","Data":"8004d45b1162d9ac83ee63f063165452a0370c2bc04390acf4a21e2c0223ca31"} Feb 27 19:19:32 crc kubenswrapper[4981]: I0227 19:19:32.198117 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2d0dde97-c1b5-4662-91b4-1e38716a6412","Type":"ContainerDied","Data":"5aecbc124e1cac3b4fc4c7f18fe0cea4f4e75ce645fa5494353d953050a5d203"} Feb 27 19:19:32 crc kubenswrapper[4981]: I0227 19:19:32.200719 4981 generic.go:334] "Generic (PLEG): container finished" podID="fd926f78-827a-4b07-9f2e-6e5cc597503b" containerID="4b50cc042b7d09d497be5894e1f900b6ab7ff7355a1e2a418b0d54026934f618" exitCode=143 Feb 27 19:19:32 crc kubenswrapper[4981]: I0227 19:19:32.201310 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd926f78-827a-4b07-9f2e-6e5cc597503b","Type":"ContainerDied","Data":"4b50cc042b7d09d497be5894e1f900b6ab7ff7355a1e2a418b0d54026934f618"} Feb 27 19:19:32 crc kubenswrapper[4981]: I0227 19:19:32.201336 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.213239 4981 generic.go:334] "Generic (PLEG): container finished" podID="2d0dde97-c1b5-4662-91b4-1e38716a6412" containerID="f58018452b6e710864b2e17f386b40e91b49f296d7eac85fbd3845c0a7df17ce" exitCode=0 Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.213510 4981 generic.go:334] "Generic (PLEG): container finished" podID="2d0dde97-c1b5-4662-91b4-1e38716a6412" containerID="a225ab3ce6658f8dd34109d64fde380fbd739d1cd42267cc2cee90abb929549d" exitCode=0 Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.213386 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d0dde97-c1b5-4662-91b4-1e38716a6412","Type":"ContainerDied","Data":"f58018452b6e710864b2e17f386b40e91b49f296d7eac85fbd3845c0a7df17ce"} Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.213804 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d0dde97-c1b5-4662-91b4-1e38716a6412","Type":"ContainerDied","Data":"a225ab3ce6658f8dd34109d64fde380fbd739d1cd42267cc2cee90abb929549d"} Feb 27 19:19:33 
crc kubenswrapper[4981]: I0227 19:19:33.315508 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.421749 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-config-data\") pod \"2d0dde97-c1b5-4662-91b4-1e38716a6412\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.422561 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d0dde97-c1b5-4662-91b4-1e38716a6412-log-httpd\") pod \"2d0dde97-c1b5-4662-91b4-1e38716a6412\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.422610 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d0dde97-c1b5-4662-91b4-1e38716a6412-run-httpd\") pod \"2d0dde97-c1b5-4662-91b4-1e38716a6412\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.422646 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-scripts\") pod \"2d0dde97-c1b5-4662-91b4-1e38716a6412\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.422691 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-ceilometer-tls-certs\") pod \"2d0dde97-c1b5-4662-91b4-1e38716a6412\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.422719 4981 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-sg-core-conf-yaml\") pod \"2d0dde97-c1b5-4662-91b4-1e38716a6412\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.422749 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-combined-ca-bundle\") pod \"2d0dde97-c1b5-4662-91b4-1e38716a6412\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.422781 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqpwx\" (UniqueName: \"kubernetes.io/projected/2d0dde97-c1b5-4662-91b4-1e38716a6412-kube-api-access-wqpwx\") pod \"2d0dde97-c1b5-4662-91b4-1e38716a6412\" (UID: \"2d0dde97-c1b5-4662-91b4-1e38716a6412\") " Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.423048 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d0dde97-c1b5-4662-91b4-1e38716a6412-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2d0dde97-c1b5-4662-91b4-1e38716a6412" (UID: "2d0dde97-c1b5-4662-91b4-1e38716a6412"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.423173 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d0dde97-c1b5-4662-91b4-1e38716a6412-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2d0dde97-c1b5-4662-91b4-1e38716a6412" (UID: "2d0dde97-c1b5-4662-91b4-1e38716a6412"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.428625 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d0dde97-c1b5-4662-91b4-1e38716a6412-kube-api-access-wqpwx" (OuterVolumeSpecName: "kube-api-access-wqpwx") pod "2d0dde97-c1b5-4662-91b4-1e38716a6412" (UID: "2d0dde97-c1b5-4662-91b4-1e38716a6412"). InnerVolumeSpecName "kube-api-access-wqpwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.432255 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-scripts" (OuterVolumeSpecName: "scripts") pod "2d0dde97-c1b5-4662-91b4-1e38716a6412" (UID: "2d0dde97-c1b5-4662-91b4-1e38716a6412"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.448114 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2d0dde97-c1b5-4662-91b4-1e38716a6412" (UID: "2d0dde97-c1b5-4662-91b4-1e38716a6412"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.494722 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2d0dde97-c1b5-4662-91b4-1e38716a6412" (UID: "2d0dde97-c1b5-4662-91b4-1e38716a6412"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.502146 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d0dde97-c1b5-4662-91b4-1e38716a6412" (UID: "2d0dde97-c1b5-4662-91b4-1e38716a6412"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.524458 4981 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d0dde97-c1b5-4662-91b4-1e38716a6412-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.524494 4981 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d0dde97-c1b5-4662-91b4-1e38716a6412-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.524504 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.524512 4981 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.524528 4981 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.524536 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.524547 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqpwx\" (UniqueName: \"kubernetes.io/projected/2d0dde97-c1b5-4662-91b4-1e38716a6412-kube-api-access-wqpwx\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.537197 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-config-data" (OuterVolumeSpecName: "config-data") pod "2d0dde97-c1b5-4662-91b4-1e38716a6412" (UID: "2d0dde97-c1b5-4662-91b4-1e38716a6412"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:33 crc kubenswrapper[4981]: I0227 19:19:33.626266 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d0dde97-c1b5-4662-91b4-1e38716a6412-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.223521 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d0dde97-c1b5-4662-91b4-1e38716a6412","Type":"ContainerDied","Data":"ef5e00afe4841401505d2d44ce843d9dc5952e56d9efb63f89b0c0051bad198e"} Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.223614 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.223861 4981 scope.go:117] "RemoveContainer" containerID="8004d45b1162d9ac83ee63f063165452a0370c2bc04390acf4a21e2c0223ca31" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.253363 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.263023 4981 scope.go:117] "RemoveContainer" containerID="5aecbc124e1cac3b4fc4c7f18fe0cea4f4e75ce645fa5494353d953050a5d203" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.270359 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.295811 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:19:34 crc kubenswrapper[4981]: E0227 19:19:34.296674 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d0dde97-c1b5-4662-91b4-1e38716a6412" containerName="ceilometer-central-agent" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.296702 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d0dde97-c1b5-4662-91b4-1e38716a6412" containerName="ceilometer-central-agent" Feb 27 19:19:34 crc kubenswrapper[4981]: E0227 19:19:34.296744 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d0dde97-c1b5-4662-91b4-1e38716a6412" containerName="sg-core" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.296752 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d0dde97-c1b5-4662-91b4-1e38716a6412" containerName="sg-core" Feb 27 19:19:34 crc kubenswrapper[4981]: E0227 19:19:34.296778 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d0dde97-c1b5-4662-91b4-1e38716a6412" containerName="proxy-httpd" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.296788 4981 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2d0dde97-c1b5-4662-91b4-1e38716a6412" containerName="proxy-httpd" Feb 27 19:19:34 crc kubenswrapper[4981]: E0227 19:19:34.296817 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d0dde97-c1b5-4662-91b4-1e38716a6412" containerName="ceilometer-notification-agent" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.296825 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d0dde97-c1b5-4662-91b4-1e38716a6412" containerName="ceilometer-notification-agent" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.297411 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d0dde97-c1b5-4662-91b4-1e38716a6412" containerName="ceilometer-central-agent" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.297434 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d0dde97-c1b5-4662-91b4-1e38716a6412" containerName="proxy-httpd" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.301157 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d0dde97-c1b5-4662-91b4-1e38716a6412" containerName="sg-core" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.301213 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d0dde97-c1b5-4662-91b4-1e38716a6412" containerName="ceilometer-notification-agent" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.305872 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.307069 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.312176 4981 scope.go:117] "RemoveContainer" containerID="f58018452b6e710864b2e17f386b40e91b49f296d7eac85fbd3845c0a7df17ce" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.314496 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.316797 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.316942 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.353713 4981 scope.go:117] "RemoveContainer" containerID="a225ab3ce6658f8dd34109d64fde380fbd739d1cd42267cc2cee90abb929549d" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.446538 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.446613 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.446682 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-8tvfk\" (UniqueName: \"kubernetes.io/projected/0d200585-c61d-43f8-a17e-54f695df7dbe-kube-api-access-8tvfk\") pod \"ceilometer-0\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.446766 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d200585-c61d-43f8-a17e-54f695df7dbe-run-httpd\") pod \"ceilometer-0\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.446926 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.446951 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-scripts\") pod \"ceilometer-0\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.446975 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-config-data\") pod \"ceilometer-0\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.447005 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d200585-c61d-43f8-a17e-54f695df7dbe-log-httpd\") pod \"ceilometer-0\" (UID: 
\"0d200585-c61d-43f8-a17e-54f695df7dbe\") " pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.550071 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-scripts\") pod \"ceilometer-0\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.550128 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-config-data\") pod \"ceilometer-0\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.550161 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.550195 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d200585-c61d-43f8-a17e-54f695df7dbe-log-httpd\") pod \"ceilometer-0\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.551456 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.551485 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0d200585-c61d-43f8-a17e-54f695df7dbe-log-httpd\") pod \"ceilometer-0\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.551511 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.551623 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tvfk\" (UniqueName: \"kubernetes.io/projected/0d200585-c61d-43f8-a17e-54f695df7dbe-kube-api-access-8tvfk\") pod \"ceilometer-0\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.551827 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d200585-c61d-43f8-a17e-54f695df7dbe-run-httpd\") pod \"ceilometer-0\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.552274 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d200585-c61d-43f8-a17e-54f695df7dbe-run-httpd\") pod \"ceilometer-0\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.555177 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-scripts\") pod \"ceilometer-0\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.556398 4981 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.556469 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.557970 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.559620 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-config-data\") pod \"ceilometer-0\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.570158 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tvfk\" (UniqueName: \"kubernetes.io/projected/0d200585-c61d-43f8-a17e-54f695df7dbe-kube-api-access-8tvfk\") pod \"ceilometer-0\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " pod="openstack/ceilometer-0" Feb 27 19:19:34 crc kubenswrapper[4981]: I0227 19:19:34.642726 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:19:35 crc kubenswrapper[4981]: W0227 19:19:35.152746 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d200585_c61d_43f8_a17e_54f695df7dbe.slice/crio-d1671f0d999dc4507c5739146ed8be565b53f6d68cea6c8a8ad6b8115ca05fbd WatchSource:0}: Error finding container d1671f0d999dc4507c5739146ed8be565b53f6d68cea6c8a8ad6b8115ca05fbd: Status 404 returned error can't find the container with id d1671f0d999dc4507c5739146ed8be565b53f6d68cea6c8a8ad6b8115ca05fbd Feb 27 19:19:35 crc kubenswrapper[4981]: I0227 19:19:35.171291 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:19:35 crc kubenswrapper[4981]: I0227 19:19:35.238315 4981 generic.go:334] "Generic (PLEG): container finished" podID="fd926f78-827a-4b07-9f2e-6e5cc597503b" containerID="f3044abc7508bfb6556f45d024f34839b9771d12f1a7bfc877b55d882bcef680" exitCode=0 Feb 27 19:19:35 crc kubenswrapper[4981]: I0227 19:19:35.238412 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd926f78-827a-4b07-9f2e-6e5cc597503b","Type":"ContainerDied","Data":"f3044abc7508bfb6556f45d024f34839b9771d12f1a7bfc877b55d882bcef680"} Feb 27 19:19:35 crc kubenswrapper[4981]: I0227 19:19:35.240418 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d200585-c61d-43f8-a17e-54f695df7dbe","Type":"ContainerStarted","Data":"d1671f0d999dc4507c5739146ed8be565b53f6d68cea6c8a8ad6b8115ca05fbd"} Feb 27 19:19:35 crc kubenswrapper[4981]: I0227 19:19:35.315881 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 19:19:35 crc kubenswrapper[4981]: I0227 19:19:35.370467 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd926f78-827a-4b07-9f2e-6e5cc597503b-logs\") pod \"fd926f78-827a-4b07-9f2e-6e5cc597503b\" (UID: \"fd926f78-827a-4b07-9f2e-6e5cc597503b\") " Feb 27 19:19:35 crc kubenswrapper[4981]: I0227 19:19:35.370665 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd926f78-827a-4b07-9f2e-6e5cc597503b-config-data\") pod \"fd926f78-827a-4b07-9f2e-6e5cc597503b\" (UID: \"fd926f78-827a-4b07-9f2e-6e5cc597503b\") " Feb 27 19:19:35 crc kubenswrapper[4981]: I0227 19:19:35.370755 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x77np\" (UniqueName: \"kubernetes.io/projected/fd926f78-827a-4b07-9f2e-6e5cc597503b-kube-api-access-x77np\") pod \"fd926f78-827a-4b07-9f2e-6e5cc597503b\" (UID: \"fd926f78-827a-4b07-9f2e-6e5cc597503b\") " Feb 27 19:19:35 crc kubenswrapper[4981]: I0227 19:19:35.370853 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd926f78-827a-4b07-9f2e-6e5cc597503b-combined-ca-bundle\") pod \"fd926f78-827a-4b07-9f2e-6e5cc597503b\" (UID: \"fd926f78-827a-4b07-9f2e-6e5cc597503b\") " Feb 27 19:19:35 crc kubenswrapper[4981]: I0227 19:19:35.371151 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd926f78-827a-4b07-9f2e-6e5cc597503b-logs" (OuterVolumeSpecName: "logs") pod "fd926f78-827a-4b07-9f2e-6e5cc597503b" (UID: "fd926f78-827a-4b07-9f2e-6e5cc597503b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:19:35 crc kubenswrapper[4981]: I0227 19:19:35.371666 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd926f78-827a-4b07-9f2e-6e5cc597503b-logs\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:35 crc kubenswrapper[4981]: I0227 19:19:35.376349 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd926f78-827a-4b07-9f2e-6e5cc597503b-kube-api-access-x77np" (OuterVolumeSpecName: "kube-api-access-x77np") pod "fd926f78-827a-4b07-9f2e-6e5cc597503b" (UID: "fd926f78-827a-4b07-9f2e-6e5cc597503b"). InnerVolumeSpecName "kube-api-access-x77np". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:19:35 crc kubenswrapper[4981]: I0227 19:19:35.397778 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd926f78-827a-4b07-9f2e-6e5cc597503b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd926f78-827a-4b07-9f2e-6e5cc597503b" (UID: "fd926f78-827a-4b07-9f2e-6e5cc597503b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:35 crc kubenswrapper[4981]: I0227 19:19:35.418344 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd926f78-827a-4b07-9f2e-6e5cc597503b-config-data" (OuterVolumeSpecName: "config-data") pod "fd926f78-827a-4b07-9f2e-6e5cc597503b" (UID: "fd926f78-827a-4b07-9f2e-6e5cc597503b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:35 crc kubenswrapper[4981]: I0227 19:19:35.474244 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd926f78-827a-4b07-9f2e-6e5cc597503b-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:35 crc kubenswrapper[4981]: I0227 19:19:35.474269 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x77np\" (UniqueName: \"kubernetes.io/projected/fd926f78-827a-4b07-9f2e-6e5cc597503b-kube-api-access-x77np\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:35 crc kubenswrapper[4981]: I0227 19:19:35.474278 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd926f78-827a-4b07-9f2e-6e5cc597503b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:35 crc kubenswrapper[4981]: I0227 19:19:35.532208 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:19:35 crc kubenswrapper[4981]: I0227 19:19:35.550028 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:19:35 crc kubenswrapper[4981]: I0227 19:19:35.644030 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d0dde97-c1b5-4662-91b4-1e38716a6412" path="/var/lib/kubelet/pods/2d0dde97-c1b5-4662-91b4-1e38716a6412/volumes" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.252504 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.252586 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd926f78-827a-4b07-9f2e-6e5cc597503b","Type":"ContainerDied","Data":"c2f99eff485df665f94f3e4a380b09686d9a1c04b988b76f49964cc62be3076e"} Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.252983 4981 scope.go:117] "RemoveContainer" containerID="f3044abc7508bfb6556f45d024f34839b9771d12f1a7bfc877b55d882bcef680" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.278605 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.281842 4981 scope.go:117] "RemoveContainer" containerID="4b50cc042b7d09d497be5894e1f900b6ab7ff7355a1e2a418b0d54026934f618" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.282960 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.319156 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.330446 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 27 19:19:36 crc kubenswrapper[4981]: E0227 19:19:36.331077 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd926f78-827a-4b07-9f2e-6e5cc597503b" containerName="nova-api-api" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.331095 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd926f78-827a-4b07-9f2e-6e5cc597503b" containerName="nova-api-api" Feb 27 19:19:36 crc kubenswrapper[4981]: E0227 19:19:36.331113 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd926f78-827a-4b07-9f2e-6e5cc597503b" containerName="nova-api-log" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.331121 4981 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="fd926f78-827a-4b07-9f2e-6e5cc597503b" containerName="nova-api-log" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.331370 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd926f78-827a-4b07-9f2e-6e5cc597503b" containerName="nova-api-api" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.331388 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd926f78-827a-4b07-9f2e-6e5cc597503b" containerName="nova-api-log" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.332669 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.337757 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.338026 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.338260 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.358440 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.497807 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b83caac4-9f7d-48a8-8810-9e2051b94054-public-tls-certs\") pod \"nova-api-0\" (UID: \"b83caac4-9f7d-48a8-8810-9e2051b94054\") " pod="openstack/nova-api-0" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.497876 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b83caac4-9f7d-48a8-8810-9e2051b94054-logs\") pod \"nova-api-0\" (UID: 
\"b83caac4-9f7d-48a8-8810-9e2051b94054\") " pod="openstack/nova-api-0" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.497980 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b83caac4-9f7d-48a8-8810-9e2051b94054-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b83caac4-9f7d-48a8-8810-9e2051b94054\") " pod="openstack/nova-api-0" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.498335 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b83caac4-9f7d-48a8-8810-9e2051b94054-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b83caac4-9f7d-48a8-8810-9e2051b94054\") " pod="openstack/nova-api-0" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.498440 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b83caac4-9f7d-48a8-8810-9e2051b94054-config-data\") pod \"nova-api-0\" (UID: \"b83caac4-9f7d-48a8-8810-9e2051b94054\") " pod="openstack/nova-api-0" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.498565 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbcqz\" (UniqueName: \"kubernetes.io/projected/b83caac4-9f7d-48a8-8810-9e2051b94054-kube-api-access-xbcqz\") pod \"nova-api-0\" (UID: \"b83caac4-9f7d-48a8-8810-9e2051b94054\") " pod="openstack/nova-api-0" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.517786 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-5pw8g"] Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.521619 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5pw8g" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.523550 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.523645 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.531821 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5pw8g"] Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.600854 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b83caac4-9f7d-48a8-8810-9e2051b94054-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b83caac4-9f7d-48a8-8810-9e2051b94054\") " pod="openstack/nova-api-0" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.600921 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b83caac4-9f7d-48a8-8810-9e2051b94054-config-data\") pod \"nova-api-0\" (UID: \"b83caac4-9f7d-48a8-8810-9e2051b94054\") " pod="openstack/nova-api-0" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.600974 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbcqz\" (UniqueName: \"kubernetes.io/projected/b83caac4-9f7d-48a8-8810-9e2051b94054-kube-api-access-xbcqz\") pod \"nova-api-0\" (UID: \"b83caac4-9f7d-48a8-8810-9e2051b94054\") " pod="openstack/nova-api-0" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.600999 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b83caac4-9f7d-48a8-8810-9e2051b94054-public-tls-certs\") pod \"nova-api-0\" (UID: \"b83caac4-9f7d-48a8-8810-9e2051b94054\") " 
pod="openstack/nova-api-0" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.601038 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b83caac4-9f7d-48a8-8810-9e2051b94054-logs\") pod \"nova-api-0\" (UID: \"b83caac4-9f7d-48a8-8810-9e2051b94054\") " pod="openstack/nova-api-0" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.601140 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b83caac4-9f7d-48a8-8810-9e2051b94054-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b83caac4-9f7d-48a8-8810-9e2051b94054\") " pod="openstack/nova-api-0" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.602427 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b83caac4-9f7d-48a8-8810-9e2051b94054-logs\") pod \"nova-api-0\" (UID: \"b83caac4-9f7d-48a8-8810-9e2051b94054\") " pod="openstack/nova-api-0" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.606695 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b83caac4-9f7d-48a8-8810-9e2051b94054-config-data\") pod \"nova-api-0\" (UID: \"b83caac4-9f7d-48a8-8810-9e2051b94054\") " pod="openstack/nova-api-0" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.609217 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b83caac4-9f7d-48a8-8810-9e2051b94054-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b83caac4-9f7d-48a8-8810-9e2051b94054\") " pod="openstack/nova-api-0" Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.609521 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b83caac4-9f7d-48a8-8810-9e2051b94054-internal-tls-certs\") pod 
\"nova-api-0\" (UID: \"b83caac4-9f7d-48a8-8810-9e2051b94054\") " pod="openstack/nova-api-0"
Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.609669 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b83caac4-9f7d-48a8-8810-9e2051b94054-public-tls-certs\") pod \"nova-api-0\" (UID: \"b83caac4-9f7d-48a8-8810-9e2051b94054\") " pod="openstack/nova-api-0"
Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.617513 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbcqz\" (UniqueName: \"kubernetes.io/projected/b83caac4-9f7d-48a8-8810-9e2051b94054-kube-api-access-xbcqz\") pod \"nova-api-0\" (UID: \"b83caac4-9f7d-48a8-8810-9e2051b94054\") " pod="openstack/nova-api-0"
Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.662929 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.705276 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4986e8e-fefc-4491-ba3f-9a85cf49472b-config-data\") pod \"nova-cell1-cell-mapping-5pw8g\" (UID: \"c4986e8e-fefc-4491-ba3f-9a85cf49472b\") " pod="openstack/nova-cell1-cell-mapping-5pw8g"
Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.705380 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv8ck\" (UniqueName: \"kubernetes.io/projected/c4986e8e-fefc-4491-ba3f-9a85cf49472b-kube-api-access-zv8ck\") pod \"nova-cell1-cell-mapping-5pw8g\" (UID: \"c4986e8e-fefc-4491-ba3f-9a85cf49472b\") " pod="openstack/nova-cell1-cell-mapping-5pw8g"
Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.705473 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4986e8e-fefc-4491-ba3f-9a85cf49472b-scripts\") pod \"nova-cell1-cell-mapping-5pw8g\" (UID: \"c4986e8e-fefc-4491-ba3f-9a85cf49472b\") " pod="openstack/nova-cell1-cell-mapping-5pw8g"
Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.705527 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4986e8e-fefc-4491-ba3f-9a85cf49472b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5pw8g\" (UID: \"c4986e8e-fefc-4491-ba3f-9a85cf49472b\") " pod="openstack/nova-cell1-cell-mapping-5pw8g"
Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.807370 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv8ck\" (UniqueName: \"kubernetes.io/projected/c4986e8e-fefc-4491-ba3f-9a85cf49472b-kube-api-access-zv8ck\") pod \"nova-cell1-cell-mapping-5pw8g\" (UID: \"c4986e8e-fefc-4491-ba3f-9a85cf49472b\") " pod="openstack/nova-cell1-cell-mapping-5pw8g"
Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.807491 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4986e8e-fefc-4491-ba3f-9a85cf49472b-scripts\") pod \"nova-cell1-cell-mapping-5pw8g\" (UID: \"c4986e8e-fefc-4491-ba3f-9a85cf49472b\") " pod="openstack/nova-cell1-cell-mapping-5pw8g"
Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.807546 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4986e8e-fefc-4491-ba3f-9a85cf49472b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5pw8g\" (UID: \"c4986e8e-fefc-4491-ba3f-9a85cf49472b\") " pod="openstack/nova-cell1-cell-mapping-5pw8g"
Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.807642 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4986e8e-fefc-4491-ba3f-9a85cf49472b-config-data\") pod \"nova-cell1-cell-mapping-5pw8g\" (UID: \"c4986e8e-fefc-4491-ba3f-9a85cf49472b\") " pod="openstack/nova-cell1-cell-mapping-5pw8g"
Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.813910 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4986e8e-fefc-4491-ba3f-9a85cf49472b-scripts\") pod \"nova-cell1-cell-mapping-5pw8g\" (UID: \"c4986e8e-fefc-4491-ba3f-9a85cf49472b\") " pod="openstack/nova-cell1-cell-mapping-5pw8g"
Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.814123 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4986e8e-fefc-4491-ba3f-9a85cf49472b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5pw8g\" (UID: \"c4986e8e-fefc-4491-ba3f-9a85cf49472b\") " pod="openstack/nova-cell1-cell-mapping-5pw8g"
Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.814127 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4986e8e-fefc-4491-ba3f-9a85cf49472b-config-data\") pod \"nova-cell1-cell-mapping-5pw8g\" (UID: \"c4986e8e-fefc-4491-ba3f-9a85cf49472b\") " pod="openstack/nova-cell1-cell-mapping-5pw8g"
Feb 27 19:19:36 crc kubenswrapper[4981]: I0227 19:19:36.838028 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv8ck\" (UniqueName: \"kubernetes.io/projected/c4986e8e-fefc-4491-ba3f-9a85cf49472b-kube-api-access-zv8ck\") pod \"nova-cell1-cell-mapping-5pw8g\" (UID: \"c4986e8e-fefc-4491-ba3f-9a85cf49472b\") " pod="openstack/nova-cell1-cell-mapping-5pw8g"
Feb 27 19:19:37 crc kubenswrapper[4981]: I0227 19:19:37.134345 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 27 19:19:37 crc kubenswrapper[4981]: I0227 19:19:37.136455 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5pw8g"
Feb 27 19:19:37 crc kubenswrapper[4981]: I0227 19:19:37.262474 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b83caac4-9f7d-48a8-8810-9e2051b94054","Type":"ContainerStarted","Data":"ae7e8b7511c401e4e42076e61b00d57205d6ebf644d510636e5f58b6a0d3f732"}
Feb 27 19:19:37 crc kubenswrapper[4981]: I0227 19:19:37.269425 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d200585-c61d-43f8-a17e-54f695df7dbe","Type":"ContainerStarted","Data":"bda927ae23d6de9d50708df6de11982ad0fda24fdecf895c9e04685dc88ac49b"}
Feb 27 19:19:37 crc kubenswrapper[4981]: I0227 19:19:37.652284 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd926f78-827a-4b07-9f2e-6e5cc597503b" path="/var/lib/kubelet/pods/fd926f78-827a-4b07-9f2e-6e5cc597503b/volumes"
Feb 27 19:19:37 crc kubenswrapper[4981]: I0227 19:19:37.674033 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5pw8g"]
Feb 27 19:19:38 crc kubenswrapper[4981]: I0227 19:19:38.280349 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d200585-c61d-43f8-a17e-54f695df7dbe","Type":"ContainerStarted","Data":"52ab7ade36a7bea163dae4153632d761e9bf6f316544d5345a2f4fc82200a997"}
Feb 27 19:19:38 crc kubenswrapper[4981]: I0227 19:19:38.284287 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b83caac4-9f7d-48a8-8810-9e2051b94054","Type":"ContainerStarted","Data":"24200f88623142b0a140b2a9d31a45570b2eefb6cf15660105a31ebb32cfb501"}
Feb 27 19:19:38 crc kubenswrapper[4981]: I0227 19:19:38.284377 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b83caac4-9f7d-48a8-8810-9e2051b94054","Type":"ContainerStarted","Data":"54e7115e80c424447156b95216feed06665bcd47444fde136c93685c500f9839"}
Feb 27 19:19:38 crc kubenswrapper[4981]: I0227 19:19:38.286464 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5pw8g" event={"ID":"c4986e8e-fefc-4491-ba3f-9a85cf49472b","Type":"ContainerStarted","Data":"504e7e59163e3827c50e33fcb947ea0b2ecd06d752de8106338c645cdfc2fc77"}
Feb 27 19:19:38 crc kubenswrapper[4981]: I0227 19:19:38.286518 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5pw8g" event={"ID":"c4986e8e-fefc-4491-ba3f-9a85cf49472b","Type":"ContainerStarted","Data":"a67be46729b139650104ddbd24d87e413886362127a486c9ccd07af58d595a7c"}
Feb 27 19:19:38 crc kubenswrapper[4981]: I0227 19:19:38.313158 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.313139393 podStartE2EDuration="2.313139393s" podCreationTimestamp="2026-02-27 19:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:19:38.303639032 +0000 UTC m=+2077.782420212" watchObservedRunningTime="2026-02-27 19:19:38.313139393 +0000 UTC m=+2077.791920553"
Feb 27 19:19:38 crc kubenswrapper[4981]: I0227 19:19:38.326605 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-5pw8g" podStartSLOduration=2.326586268 podStartE2EDuration="2.326586268s" podCreationTimestamp="2026-02-27 19:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:19:38.319946423 +0000 UTC m=+2077.798727583" watchObservedRunningTime="2026-02-27 19:19:38.326586268 +0000 UTC m=+2077.805367428"
Feb 27 19:19:38 crc kubenswrapper[4981]: I0227 19:19:38.710121 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4"
Feb 27 19:19:38 crc kubenswrapper[4981]: I0227 19:19:38.787486 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-tfm5p"]
Feb 27 19:19:38 crc kubenswrapper[4981]: I0227 19:19:38.787849 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" podUID="5f0b93e5-3670-4873-8376-fdf1281ae2b4" containerName="dnsmasq-dns" containerID="cri-o://5875c960bc5861171fc7155ae6202ee5108b771f739acb2c3f048136bc2e6b8b" gracePeriod=10
Feb 27 19:19:39 crc kubenswrapper[4981]: I0227 19:19:39.301834 4981 generic.go:334] "Generic (PLEG): container finished" podID="5f0b93e5-3670-4873-8376-fdf1281ae2b4" containerID="5875c960bc5861171fc7155ae6202ee5108b771f739acb2c3f048136bc2e6b8b" exitCode=0
Feb 27 19:19:39 crc kubenswrapper[4981]: I0227 19:19:39.303695 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" event={"ID":"5f0b93e5-3670-4873-8376-fdf1281ae2b4","Type":"ContainerDied","Data":"5875c960bc5861171fc7155ae6202ee5108b771f739acb2c3f048136bc2e6b8b"}
Feb 27 19:19:39 crc kubenswrapper[4981]: I0227 19:19:39.844092 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-tfm5p"
Feb 27 19:19:39 crc kubenswrapper[4981]: I0227 19:19:39.972985 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-dns-svc\") pod \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\" (UID: \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\") "
Feb 27 19:19:39 crc kubenswrapper[4981]: I0227 19:19:39.973612 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-ovsdbserver-sb\") pod \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\" (UID: \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\") "
Feb 27 19:19:39 crc kubenswrapper[4981]: I0227 19:19:39.973687 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-dns-swift-storage-0\") pod \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\" (UID: \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\") "
Feb 27 19:19:39 crc kubenswrapper[4981]: I0227 19:19:39.973758 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgf7g\" (UniqueName: \"kubernetes.io/projected/5f0b93e5-3670-4873-8376-fdf1281ae2b4-kube-api-access-vgf7g\") pod \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\" (UID: \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\") "
Feb 27 19:19:39 crc kubenswrapper[4981]: I0227 19:19:39.973790 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-ovsdbserver-nb\") pod \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\" (UID: \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\") "
Feb 27 19:19:39 crc kubenswrapper[4981]: I0227 19:19:39.973813 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-config\") pod \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\" (UID: \"5f0b93e5-3670-4873-8376-fdf1281ae2b4\") "
Feb 27 19:19:39 crc kubenswrapper[4981]: I0227 19:19:39.978108 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f0b93e5-3670-4873-8376-fdf1281ae2b4-kube-api-access-vgf7g" (OuterVolumeSpecName: "kube-api-access-vgf7g") pod "5f0b93e5-3670-4873-8376-fdf1281ae2b4" (UID: "5f0b93e5-3670-4873-8376-fdf1281ae2b4"). InnerVolumeSpecName "kube-api-access-vgf7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:19:40 crc kubenswrapper[4981]: I0227 19:19:40.025258 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5f0b93e5-3670-4873-8376-fdf1281ae2b4" (UID: "5f0b93e5-3670-4873-8376-fdf1281ae2b4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:19:40 crc kubenswrapper[4981]: I0227 19:19:40.031593 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5f0b93e5-3670-4873-8376-fdf1281ae2b4" (UID: "5f0b93e5-3670-4873-8376-fdf1281ae2b4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:19:40 crc kubenswrapper[4981]: I0227 19:19:40.033553 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-config" (OuterVolumeSpecName: "config") pod "5f0b93e5-3670-4873-8376-fdf1281ae2b4" (UID: "5f0b93e5-3670-4873-8376-fdf1281ae2b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:19:40 crc kubenswrapper[4981]: I0227 19:19:40.038100 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5f0b93e5-3670-4873-8376-fdf1281ae2b4" (UID: "5f0b93e5-3670-4873-8376-fdf1281ae2b4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:19:40 crc kubenswrapper[4981]: I0227 19:19:40.043323 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5f0b93e5-3670-4873-8376-fdf1281ae2b4" (UID: "5f0b93e5-3670-4873-8376-fdf1281ae2b4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:19:40 crc kubenswrapper[4981]: I0227 19:19:40.078503 4981 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 27 19:19:40 crc kubenswrapper[4981]: I0227 19:19:40.078537 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgf7g\" (UniqueName: \"kubernetes.io/projected/5f0b93e5-3670-4873-8376-fdf1281ae2b4-kube-api-access-vgf7g\") on node \"crc\" DevicePath \"\""
Feb 27 19:19:40 crc kubenswrapper[4981]: I0227 19:19:40.078550 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 27 19:19:40 crc kubenswrapper[4981]: I0227 19:19:40.078564 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-config\") on node \"crc\" DevicePath \"\""
Feb 27 19:19:40 crc kubenswrapper[4981]: I0227 19:19:40.078573 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 27 19:19:40 crc kubenswrapper[4981]: I0227 19:19:40.078580 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f0b93e5-3670-4873-8376-fdf1281ae2b4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 27 19:19:40 crc kubenswrapper[4981]: I0227 19:19:40.313936 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d200585-c61d-43f8-a17e-54f695df7dbe","Type":"ContainerStarted","Data":"557acdada7a6927a2b4039b69f2529a1cfea5b22f511fe9433b3df0d998e6ebe"}
Feb 27 19:19:40 crc kubenswrapper[4981]: I0227 19:19:40.316425 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" event={"ID":"5f0b93e5-3670-4873-8376-fdf1281ae2b4","Type":"ContainerDied","Data":"e986f89b43f7d489036a5e537a7b3773846435a4ce56f3b4bb7633d11f259ddc"}
Feb 27 19:19:40 crc kubenswrapper[4981]: I0227 19:19:40.316476 4981 scope.go:117] "RemoveContainer" containerID="5875c960bc5861171fc7155ae6202ee5108b771f739acb2c3f048136bc2e6b8b"
Feb 27 19:19:40 crc kubenswrapper[4981]: I0227 19:19:40.316507 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-tfm5p"
Feb 27 19:19:40 crc kubenswrapper[4981]: I0227 19:19:40.336722 4981 scope.go:117] "RemoveContainer" containerID="ce5923375012ec6f5ac3abb4bd3da961e135a04c98a2f4682e7650a78d0ab345"
Feb 27 19:19:40 crc kubenswrapper[4981]: I0227 19:19:40.367813 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-tfm5p"]
Feb 27 19:19:40 crc kubenswrapper[4981]: I0227 19:19:40.379830 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-tfm5p"]
Feb 27 19:19:41 crc kubenswrapper[4981]: I0227 19:19:41.640675 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f0b93e5-3670-4873-8376-fdf1281ae2b4" path="/var/lib/kubelet/pods/5f0b93e5-3670-4873-8376-fdf1281ae2b4/volumes"
Feb 27 19:19:43 crc kubenswrapper[4981]: I0227 19:19:43.348813 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d200585-c61d-43f8-a17e-54f695df7dbe","Type":"ContainerStarted","Data":"fae968112d7a204ca91d2a8361567a64886536860a977043c0f6d6e84eeb765b"}
Feb 27 19:19:43 crc kubenswrapper[4981]: I0227 19:19:43.349232 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 27 19:19:43 crc kubenswrapper[4981]: I0227 19:19:43.350515 4981 generic.go:334] "Generic (PLEG): container finished" podID="c4986e8e-fefc-4491-ba3f-9a85cf49472b" containerID="504e7e59163e3827c50e33fcb947ea0b2ecd06d752de8106338c645cdfc2fc77" exitCode=0
Feb 27 19:19:43 crc kubenswrapper[4981]: I0227 19:19:43.350569 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5pw8g" event={"ID":"c4986e8e-fefc-4491-ba3f-9a85cf49472b","Type":"ContainerDied","Data":"504e7e59163e3827c50e33fcb947ea0b2ecd06d752de8106338c645cdfc2fc77"}
Feb 27 19:19:43 crc kubenswrapper[4981]: I0227 19:19:43.379900 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.836457416 podStartE2EDuration="9.379871311s" podCreationTimestamp="2026-02-27 19:19:34 +0000 UTC" firstStartedPulling="2026-02-27 19:19:35.15715426 +0000 UTC m=+2074.635935420" lastFinishedPulling="2026-02-27 19:19:42.700568115 +0000 UTC m=+2082.179349315" observedRunningTime="2026-02-27 19:19:43.374440274 +0000 UTC m=+2082.853221474" watchObservedRunningTime="2026-02-27 19:19:43.379871311 +0000 UTC m=+2082.858652511"
Feb 27 19:19:45 crc kubenswrapper[4981]: I0227 19:19:44.699278 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5pw8g"
Feb 27 19:19:45 crc kubenswrapper[4981]: I0227 19:19:44.784306 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4986e8e-fefc-4491-ba3f-9a85cf49472b-scripts\") pod \"c4986e8e-fefc-4491-ba3f-9a85cf49472b\" (UID: \"c4986e8e-fefc-4491-ba3f-9a85cf49472b\") "
Feb 27 19:19:45 crc kubenswrapper[4981]: I0227 19:19:44.784381 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4986e8e-fefc-4491-ba3f-9a85cf49472b-combined-ca-bundle\") pod \"c4986e8e-fefc-4491-ba3f-9a85cf49472b\" (UID: \"c4986e8e-fefc-4491-ba3f-9a85cf49472b\") "
Feb 27 19:19:45 crc kubenswrapper[4981]: I0227 19:19:44.784419 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv8ck\" (UniqueName: \"kubernetes.io/projected/c4986e8e-fefc-4491-ba3f-9a85cf49472b-kube-api-access-zv8ck\") pod \"c4986e8e-fefc-4491-ba3f-9a85cf49472b\" (UID: \"c4986e8e-fefc-4491-ba3f-9a85cf49472b\") "
Feb 27 19:19:45 crc kubenswrapper[4981]: I0227 19:19:44.784470 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4986e8e-fefc-4491-ba3f-9a85cf49472b-config-data\") pod \"c4986e8e-fefc-4491-ba3f-9a85cf49472b\" (UID: \"c4986e8e-fefc-4491-ba3f-9a85cf49472b\") "
Feb 27 19:19:45 crc kubenswrapper[4981]: I0227 19:19:44.790428 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4986e8e-fefc-4491-ba3f-9a85cf49472b-kube-api-access-zv8ck" (OuterVolumeSpecName: "kube-api-access-zv8ck") pod "c4986e8e-fefc-4491-ba3f-9a85cf49472b" (UID: "c4986e8e-fefc-4491-ba3f-9a85cf49472b"). InnerVolumeSpecName "kube-api-access-zv8ck". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:19:45 crc kubenswrapper[4981]: I0227 19:19:44.797299 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4986e8e-fefc-4491-ba3f-9a85cf49472b-scripts" (OuterVolumeSpecName: "scripts") pod "c4986e8e-fefc-4491-ba3f-9a85cf49472b" (UID: "c4986e8e-fefc-4491-ba3f-9a85cf49472b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:19:45 crc kubenswrapper[4981]: I0227 19:19:44.819591 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4986e8e-fefc-4491-ba3f-9a85cf49472b-config-data" (OuterVolumeSpecName: "config-data") pod "c4986e8e-fefc-4491-ba3f-9a85cf49472b" (UID: "c4986e8e-fefc-4491-ba3f-9a85cf49472b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:19:45 crc kubenswrapper[4981]: I0227 19:19:44.820922 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4986e8e-fefc-4491-ba3f-9a85cf49472b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4986e8e-fefc-4491-ba3f-9a85cf49472b" (UID: "c4986e8e-fefc-4491-ba3f-9a85cf49472b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:19:45 crc kubenswrapper[4981]: I0227 19:19:44.826996 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757b4f8459-tfm5p" podUID="5f0b93e5-3670-4873-8376-fdf1281ae2b4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.208:5353: i/o timeout"
Feb 27 19:19:45 crc kubenswrapper[4981]: I0227 19:19:44.887404 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4986e8e-fefc-4491-ba3f-9a85cf49472b-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 19:19:45 crc kubenswrapper[4981]: I0227 19:19:44.887437 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4986e8e-fefc-4491-ba3f-9a85cf49472b-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 19:19:45 crc kubenswrapper[4981]: I0227 19:19:44.887449 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4986e8e-fefc-4491-ba3f-9a85cf49472b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 19:19:45 crc kubenswrapper[4981]: I0227 19:19:44.887462 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zv8ck\" (UniqueName: \"kubernetes.io/projected/c4986e8e-fefc-4491-ba3f-9a85cf49472b-kube-api-access-zv8ck\") on node \"crc\" DevicePath \"\""
Feb 27 19:19:45 crc kubenswrapper[4981]: I0227 19:19:45.368901 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5pw8g" event={"ID":"c4986e8e-fefc-4491-ba3f-9a85cf49472b","Type":"ContainerDied","Data":"a67be46729b139650104ddbd24d87e413886362127a486c9ccd07af58d595a7c"}
Feb 27 19:19:45 crc kubenswrapper[4981]: I0227 19:19:45.369247 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a67be46729b139650104ddbd24d87e413886362127a486c9ccd07af58d595a7c"
Feb 27 19:19:45 crc kubenswrapper[4981]: I0227 19:19:45.368980 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5pw8g"
Feb 27 19:19:45 crc kubenswrapper[4981]: I0227 19:19:45.590200 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 27 19:19:45 crc kubenswrapper[4981]: I0227 19:19:45.590552 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b83caac4-9f7d-48a8-8810-9e2051b94054" containerName="nova-api-log" containerID="cri-o://54e7115e80c424447156b95216feed06665bcd47444fde136c93685c500f9839" gracePeriod=30
Feb 27 19:19:45 crc kubenswrapper[4981]: I0227 19:19:45.590616 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b83caac4-9f7d-48a8-8810-9e2051b94054" containerName="nova-api-api" containerID="cri-o://24200f88623142b0a140b2a9d31a45570b2eefb6cf15660105a31ebb32cfb501" gracePeriod=30
Feb 27 19:19:45 crc kubenswrapper[4981]: I0227 19:19:45.599818 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 27 19:19:45 crc kubenswrapper[4981]: I0227 19:19:45.600041 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="47c21807-0372-41ce-a60d-021a45429037" containerName="nova-scheduler-scheduler" containerID="cri-o://519d973b88c6da9ab8e2526937da89c47d2d8f93fded9a976c2ef5e0a0606a4c" gracePeriod=30
Feb 27 19:19:45 crc kubenswrapper[4981]: I0227 19:19:45.618977 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 27 19:19:45 crc kubenswrapper[4981]: I0227 19:19:45.619467 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="af9c3f90-c49d-4d3f-9d4a-567f5683434b" containerName="nova-metadata-log" containerID="cri-o://166f3477f6451532c79a3cd47a6cb78dd4806dcbd36450c24fa17f776541affa" gracePeriod=30
Feb 27 19:19:45 crc kubenswrapper[4981]: I0227 19:19:45.619507 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="af9c3f90-c49d-4d3f-9d4a-567f5683434b" containerName="nova-metadata-metadata" containerID="cri-o://866b94f0fde1088c3ac2a9d2972b5bb9cc7b5fb09f1b0318f62b52074b51d020" gracePeriod=30
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.249358 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.343836 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b83caac4-9f7d-48a8-8810-9e2051b94054-logs\") pod \"b83caac4-9f7d-48a8-8810-9e2051b94054\" (UID: \"b83caac4-9f7d-48a8-8810-9e2051b94054\") "
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.343885 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbcqz\" (UniqueName: \"kubernetes.io/projected/b83caac4-9f7d-48a8-8810-9e2051b94054-kube-api-access-xbcqz\") pod \"b83caac4-9f7d-48a8-8810-9e2051b94054\" (UID: \"b83caac4-9f7d-48a8-8810-9e2051b94054\") "
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.344021 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b83caac4-9f7d-48a8-8810-9e2051b94054-internal-tls-certs\") pod \"b83caac4-9f7d-48a8-8810-9e2051b94054\" (UID: \"b83caac4-9f7d-48a8-8810-9e2051b94054\") "
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.344116 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b83caac4-9f7d-48a8-8810-9e2051b94054-combined-ca-bundle\") pod \"b83caac4-9f7d-48a8-8810-9e2051b94054\" (UID: \"b83caac4-9f7d-48a8-8810-9e2051b94054\") "
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.344230 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b83caac4-9f7d-48a8-8810-9e2051b94054-config-data\") pod \"b83caac4-9f7d-48a8-8810-9e2051b94054\" (UID: \"b83caac4-9f7d-48a8-8810-9e2051b94054\") "
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.344259 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b83caac4-9f7d-48a8-8810-9e2051b94054-public-tls-certs\") pod \"b83caac4-9f7d-48a8-8810-9e2051b94054\" (UID: \"b83caac4-9f7d-48a8-8810-9e2051b94054\") "
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.345759 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b83caac4-9f7d-48a8-8810-9e2051b94054-logs" (OuterVolumeSpecName: "logs") pod "b83caac4-9f7d-48a8-8810-9e2051b94054" (UID: "b83caac4-9f7d-48a8-8810-9e2051b94054"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.350157 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b83caac4-9f7d-48a8-8810-9e2051b94054-kube-api-access-xbcqz" (OuterVolumeSpecName: "kube-api-access-xbcqz") pod "b83caac4-9f7d-48a8-8810-9e2051b94054" (UID: "b83caac4-9f7d-48a8-8810-9e2051b94054"). InnerVolumeSpecName "kube-api-access-xbcqz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.386255 4981 generic.go:334] "Generic (PLEG): container finished" podID="b83caac4-9f7d-48a8-8810-9e2051b94054" containerID="24200f88623142b0a140b2a9d31a45570b2eefb6cf15660105a31ebb32cfb501" exitCode=0
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.386285 4981 generic.go:334] "Generic (PLEG): container finished" podID="b83caac4-9f7d-48a8-8810-9e2051b94054" containerID="54e7115e80c424447156b95216feed06665bcd47444fde136c93685c500f9839" exitCode=143
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.386347 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.386374 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b83caac4-9f7d-48a8-8810-9e2051b94054","Type":"ContainerDied","Data":"24200f88623142b0a140b2a9d31a45570b2eefb6cf15660105a31ebb32cfb501"}
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.386402 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b83caac4-9f7d-48a8-8810-9e2051b94054","Type":"ContainerDied","Data":"54e7115e80c424447156b95216feed06665bcd47444fde136c93685c500f9839"}
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.386412 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b83caac4-9f7d-48a8-8810-9e2051b94054","Type":"ContainerDied","Data":"ae7e8b7511c401e4e42076e61b00d57205d6ebf644d510636e5f58b6a0d3f732"}
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.386427 4981 scope.go:117] "RemoveContainer" containerID="24200f88623142b0a140b2a9d31a45570b2eefb6cf15660105a31ebb32cfb501"
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.393713 4981 generic.go:334] "Generic (PLEG): container finished" podID="af9c3f90-c49d-4d3f-9d4a-567f5683434b" containerID="166f3477f6451532c79a3cd47a6cb78dd4806dcbd36450c24fa17f776541affa" exitCode=143
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.393755 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af9c3f90-c49d-4d3f-9d4a-567f5683434b","Type":"ContainerDied","Data":"166f3477f6451532c79a3cd47a6cb78dd4806dcbd36450c24fa17f776541affa"}
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.401259 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b83caac4-9f7d-48a8-8810-9e2051b94054-config-data" (OuterVolumeSpecName: "config-data") pod "b83caac4-9f7d-48a8-8810-9e2051b94054" (UID: "b83caac4-9f7d-48a8-8810-9e2051b94054"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.401285 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b83caac4-9f7d-48a8-8810-9e2051b94054-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b83caac4-9f7d-48a8-8810-9e2051b94054" (UID: "b83caac4-9f7d-48a8-8810-9e2051b94054"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.401435 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b83caac4-9f7d-48a8-8810-9e2051b94054-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b83caac4-9f7d-48a8-8810-9e2051b94054" (UID: "b83caac4-9f7d-48a8-8810-9e2051b94054"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.401462 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b83caac4-9f7d-48a8-8810-9e2051b94054-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b83caac4-9f7d-48a8-8810-9e2051b94054" (UID: "b83caac4-9f7d-48a8-8810-9e2051b94054"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.418857 4981 scope.go:117] "RemoveContainer" containerID="54e7115e80c424447156b95216feed06665bcd47444fde136c93685c500f9839"
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.448517 4981 scope.go:117] "RemoveContainer" containerID="24200f88623142b0a140b2a9d31a45570b2eefb6cf15660105a31ebb32cfb501"
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.450223 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b83caac4-9f7d-48a8-8810-9e2051b94054-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.450632 4981 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b83caac4-9f7d-48a8-8810-9e2051b94054-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.450698 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b83caac4-9f7d-48a8-8810-9e2051b94054-logs\") on node \"crc\" DevicePath \"\""
Feb 27 19:19:46 crc kubenswrapper[4981]: E0227 19:19:46.450731 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24200f88623142b0a140b2a9d31a45570b2eefb6cf15660105a31ebb32cfb501\": container with ID starting with 24200f88623142b0a140b2a9d31a45570b2eefb6cf15660105a31ebb32cfb501 not found: ID does not exist" containerID="24200f88623142b0a140b2a9d31a45570b2eefb6cf15660105a31ebb32cfb501"
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.450799 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24200f88623142b0a140b2a9d31a45570b2eefb6cf15660105a31ebb32cfb501"} err="failed to get container status \"24200f88623142b0a140b2a9d31a45570b2eefb6cf15660105a31ebb32cfb501\": rpc error: code = NotFound desc = could not find container \"24200f88623142b0a140b2a9d31a45570b2eefb6cf15660105a31ebb32cfb501\": container with ID starting with 24200f88623142b0a140b2a9d31a45570b2eefb6cf15660105a31ebb32cfb501 not found: ID does not exist"
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.450836 4981 scope.go:117] "RemoveContainer" containerID="54e7115e80c424447156b95216feed06665bcd47444fde136c93685c500f9839"
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.450749 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbcqz\" (UniqueName: \"kubernetes.io/projected/b83caac4-9f7d-48a8-8810-9e2051b94054-kube-api-access-xbcqz\") on node \"crc\" DevicePath \"\""
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.451022 4981 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b83caac4-9f7d-48a8-8810-9e2051b94054-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.451162 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b83caac4-9f7d-48a8-8810-9e2051b94054-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 19:19:46 crc kubenswrapper[4981]: E0227 19:19:46.451418 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54e7115e80c424447156b95216feed06665bcd47444fde136c93685c500f9839\": container with ID starting with 54e7115e80c424447156b95216feed06665bcd47444fde136c93685c500f9839 not found: ID does not exist" containerID="54e7115e80c424447156b95216feed06665bcd47444fde136c93685c500f9839"
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.451507 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54e7115e80c424447156b95216feed06665bcd47444fde136c93685c500f9839"} err="failed to get container status \"54e7115e80c424447156b95216feed06665bcd47444fde136c93685c500f9839\": rpc error: code = NotFound desc = could not find container \"54e7115e80c424447156b95216feed06665bcd47444fde136c93685c500f9839\": container with ID starting with 54e7115e80c424447156b95216feed06665bcd47444fde136c93685c500f9839 not found: ID does not exist"
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.451610 4981 scope.go:117] "RemoveContainer" containerID="24200f88623142b0a140b2a9d31a45570b2eefb6cf15660105a31ebb32cfb501"
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.452018 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24200f88623142b0a140b2a9d31a45570b2eefb6cf15660105a31ebb32cfb501"} err="failed to get container status \"24200f88623142b0a140b2a9d31a45570b2eefb6cf15660105a31ebb32cfb501\": rpc error: code = NotFound desc = could not find container \"24200f88623142b0a140b2a9d31a45570b2eefb6cf15660105a31ebb32cfb501\": container with ID starting with 24200f88623142b0a140b2a9d31a45570b2eefb6cf15660105a31ebb32cfb501 not found: ID does not exist"
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.452041 4981 scope.go:117] "RemoveContainer" containerID="54e7115e80c424447156b95216feed06665bcd47444fde136c93685c500f9839"
Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.452358 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54e7115e80c424447156b95216feed06665bcd47444fde136c93685c500f9839"} err="failed to get container status
\"54e7115e80c424447156b95216feed06665bcd47444fde136c93685c500f9839\": rpc error: code = NotFound desc = could not find container \"54e7115e80c424447156b95216feed06665bcd47444fde136c93685c500f9839\": container with ID starting with 54e7115e80c424447156b95216feed06665bcd47444fde136c93685c500f9839 not found: ID does not exist" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.793461 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.808977 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.836909 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 27 19:19:46 crc kubenswrapper[4981]: E0227 19:19:46.837412 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b83caac4-9f7d-48a8-8810-9e2051b94054" containerName="nova-api-api" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.837431 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b83caac4-9f7d-48a8-8810-9e2051b94054" containerName="nova-api-api" Feb 27 19:19:46 crc kubenswrapper[4981]: E0227 19:19:46.837491 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b83caac4-9f7d-48a8-8810-9e2051b94054" containerName="nova-api-log" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.837498 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b83caac4-9f7d-48a8-8810-9e2051b94054" containerName="nova-api-log" Feb 27 19:19:46 crc kubenswrapper[4981]: E0227 19:19:46.837515 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4986e8e-fefc-4491-ba3f-9a85cf49472b" containerName="nova-manage" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.837521 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4986e8e-fefc-4491-ba3f-9a85cf49472b" containerName="nova-manage" Feb 27 19:19:46 crc kubenswrapper[4981]: E0227 19:19:46.837544 4981 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f0b93e5-3670-4873-8376-fdf1281ae2b4" containerName="init" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.837549 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f0b93e5-3670-4873-8376-fdf1281ae2b4" containerName="init" Feb 27 19:19:46 crc kubenswrapper[4981]: E0227 19:19:46.837562 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f0b93e5-3670-4873-8376-fdf1281ae2b4" containerName="dnsmasq-dns" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.837569 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f0b93e5-3670-4873-8376-fdf1281ae2b4" containerName="dnsmasq-dns" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.837731 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="b83caac4-9f7d-48a8-8810-9e2051b94054" containerName="nova-api-api" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.837743 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4986e8e-fefc-4491-ba3f-9a85cf49472b" containerName="nova-manage" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.837758 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f0b93e5-3670-4873-8376-fdf1281ae2b4" containerName="dnsmasq-dns" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.837769 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="b83caac4-9f7d-48a8-8810-9e2051b94054" containerName="nova-api-log" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.838867 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.841860 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.842020 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.845635 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.860374 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/faa3914e-426b-4791-8199-a7630729baf0-public-tls-certs\") pod \"nova-api-0\" (UID: \"faa3914e-426b-4791-8199-a7630729baf0\") " pod="openstack/nova-api-0" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.860663 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/faa3914e-426b-4791-8199-a7630729baf0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"faa3914e-426b-4791-8199-a7630729baf0\") " pod="openstack/nova-api-0" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.860798 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa3914e-426b-4791-8199-a7630729baf0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"faa3914e-426b-4791-8199-a7630729baf0\") " pod="openstack/nova-api-0" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.860923 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faa3914e-426b-4791-8199-a7630729baf0-logs\") pod \"nova-api-0\" (UID: \"faa3914e-426b-4791-8199-a7630729baf0\") 
" pod="openstack/nova-api-0" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.861022 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa3914e-426b-4791-8199-a7630729baf0-config-data\") pod \"nova-api-0\" (UID: \"faa3914e-426b-4791-8199-a7630729baf0\") " pod="openstack/nova-api-0" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.861233 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqgjt\" (UniqueName: \"kubernetes.io/projected/faa3914e-426b-4791-8199-a7630729baf0-kube-api-access-sqgjt\") pod \"nova-api-0\" (UID: \"faa3914e-426b-4791-8199-a7630729baf0\") " pod="openstack/nova-api-0" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.905604 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.963657 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/faa3914e-426b-4791-8199-a7630729baf0-public-tls-certs\") pod \"nova-api-0\" (UID: \"faa3914e-426b-4791-8199-a7630729baf0\") " pod="openstack/nova-api-0" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.963707 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/faa3914e-426b-4791-8199-a7630729baf0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"faa3914e-426b-4791-8199-a7630729baf0\") " pod="openstack/nova-api-0" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.963735 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faa3914e-426b-4791-8199-a7630729baf0-logs\") pod \"nova-api-0\" (UID: \"faa3914e-426b-4791-8199-a7630729baf0\") " pod="openstack/nova-api-0" Feb 27 19:19:46 crc 
kubenswrapper[4981]: I0227 19:19:46.963750 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa3914e-426b-4791-8199-a7630729baf0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"faa3914e-426b-4791-8199-a7630729baf0\") " pod="openstack/nova-api-0" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.963764 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa3914e-426b-4791-8199-a7630729baf0-config-data\") pod \"nova-api-0\" (UID: \"faa3914e-426b-4791-8199-a7630729baf0\") " pod="openstack/nova-api-0" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.963787 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqgjt\" (UniqueName: \"kubernetes.io/projected/faa3914e-426b-4791-8199-a7630729baf0-kube-api-access-sqgjt\") pod \"nova-api-0\" (UID: \"faa3914e-426b-4791-8199-a7630729baf0\") " pod="openstack/nova-api-0" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.964494 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faa3914e-426b-4791-8199-a7630729baf0-logs\") pod \"nova-api-0\" (UID: \"faa3914e-426b-4791-8199-a7630729baf0\") " pod="openstack/nova-api-0" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.969925 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/faa3914e-426b-4791-8199-a7630729baf0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"faa3914e-426b-4791-8199-a7630729baf0\") " pod="openstack/nova-api-0" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.970583 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa3914e-426b-4791-8199-a7630729baf0-combined-ca-bundle\") pod \"nova-api-0\" 
(UID: \"faa3914e-426b-4791-8199-a7630729baf0\") " pod="openstack/nova-api-0" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.970828 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa3914e-426b-4791-8199-a7630729baf0-config-data\") pod \"nova-api-0\" (UID: \"faa3914e-426b-4791-8199-a7630729baf0\") " pod="openstack/nova-api-0" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.979382 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/faa3914e-426b-4791-8199-a7630729baf0-public-tls-certs\") pod \"nova-api-0\" (UID: \"faa3914e-426b-4791-8199-a7630729baf0\") " pod="openstack/nova-api-0" Feb 27 19:19:46 crc kubenswrapper[4981]: I0227 19:19:46.985468 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqgjt\" (UniqueName: \"kubernetes.io/projected/faa3914e-426b-4791-8199-a7630729baf0-kube-api-access-sqgjt\") pod \"nova-api-0\" (UID: \"faa3914e-426b-4791-8199-a7630729baf0\") " pod="openstack/nova-api-0" Feb 27 19:19:47 crc kubenswrapper[4981]: I0227 19:19:47.156037 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 19:19:47 crc kubenswrapper[4981]: E0227 19:19:47.380298 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 519d973b88c6da9ab8e2526937da89c47d2d8f93fded9a976c2ef5e0a0606a4c is running failed: container process not found" containerID="519d973b88c6da9ab8e2526937da89c47d2d8f93fded9a976c2ef5e0a0606a4c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 19:19:47 crc kubenswrapper[4981]: E0227 19:19:47.386223 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 519d973b88c6da9ab8e2526937da89c47d2d8f93fded9a976c2ef5e0a0606a4c is running failed: container process not found" containerID="519d973b88c6da9ab8e2526937da89c47d2d8f93fded9a976c2ef5e0a0606a4c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 19:19:47 crc kubenswrapper[4981]: E0227 19:19:47.399512 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 519d973b88c6da9ab8e2526937da89c47d2d8f93fded9a976c2ef5e0a0606a4c is running failed: container process not found" containerID="519d973b88c6da9ab8e2526937da89c47d2d8f93fded9a976c2ef5e0a0606a4c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 19:19:47 crc kubenswrapper[4981]: E0227 19:19:47.399613 4981 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 519d973b88c6da9ab8e2526937da89c47d2d8f93fded9a976c2ef5e0a0606a4c is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="47c21807-0372-41ce-a60d-021a45429037" containerName="nova-scheduler-scheduler" Feb 27 19:19:47 crc kubenswrapper[4981]: I0227 19:19:47.463576 4981 generic.go:334] "Generic (PLEG): container 
finished" podID="47c21807-0372-41ce-a60d-021a45429037" containerID="519d973b88c6da9ab8e2526937da89c47d2d8f93fded9a976c2ef5e0a0606a4c" exitCode=0 Feb 27 19:19:47 crc kubenswrapper[4981]: I0227 19:19:47.463643 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"47c21807-0372-41ce-a60d-021a45429037","Type":"ContainerDied","Data":"519d973b88c6da9ab8e2526937da89c47d2d8f93fded9a976c2ef5e0a0606a4c"} Feb 27 19:19:47 crc kubenswrapper[4981]: I0227 19:19:47.469798 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 19:19:47 crc kubenswrapper[4981]: I0227 19:19:47.588276 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47c21807-0372-41ce-a60d-021a45429037-config-data\") pod \"47c21807-0372-41ce-a60d-021a45429037\" (UID: \"47c21807-0372-41ce-a60d-021a45429037\") " Feb 27 19:19:47 crc kubenswrapper[4981]: I0227 19:19:47.588725 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq25q\" (UniqueName: \"kubernetes.io/projected/47c21807-0372-41ce-a60d-021a45429037-kube-api-access-dq25q\") pod \"47c21807-0372-41ce-a60d-021a45429037\" (UID: \"47c21807-0372-41ce-a60d-021a45429037\") " Feb 27 19:19:47 crc kubenswrapper[4981]: I0227 19:19:47.588786 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47c21807-0372-41ce-a60d-021a45429037-combined-ca-bundle\") pod \"47c21807-0372-41ce-a60d-021a45429037\" (UID: \"47c21807-0372-41ce-a60d-021a45429037\") " Feb 27 19:19:47 crc kubenswrapper[4981]: I0227 19:19:47.594361 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47c21807-0372-41ce-a60d-021a45429037-kube-api-access-dq25q" (OuterVolumeSpecName: "kube-api-access-dq25q") pod 
"47c21807-0372-41ce-a60d-021a45429037" (UID: "47c21807-0372-41ce-a60d-021a45429037"). InnerVolumeSpecName "kube-api-access-dq25q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:19:47 crc kubenswrapper[4981]: I0227 19:19:47.626185 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47c21807-0372-41ce-a60d-021a45429037-config-data" (OuterVolumeSpecName: "config-data") pod "47c21807-0372-41ce-a60d-021a45429037" (UID: "47c21807-0372-41ce-a60d-021a45429037"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:47 crc kubenswrapper[4981]: I0227 19:19:47.629929 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47c21807-0372-41ce-a60d-021a45429037-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47c21807-0372-41ce-a60d-021a45429037" (UID: "47c21807-0372-41ce-a60d-021a45429037"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:47 crc kubenswrapper[4981]: I0227 19:19:47.647449 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b83caac4-9f7d-48a8-8810-9e2051b94054" path="/var/lib/kubelet/pods/b83caac4-9f7d-48a8-8810-9e2051b94054/volumes" Feb 27 19:19:47 crc kubenswrapper[4981]: I0227 19:19:47.691236 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47c21807-0372-41ce-a60d-021a45429037-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:47 crc kubenswrapper[4981]: I0227 19:19:47.691281 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47c21807-0372-41ce-a60d-021a45429037-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:47 crc kubenswrapper[4981]: I0227 19:19:47.691294 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq25q\" (UniqueName: \"kubernetes.io/projected/47c21807-0372-41ce-a60d-021a45429037-kube-api-access-dq25q\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:47 crc kubenswrapper[4981]: I0227 19:19:47.802854 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 19:19:48 crc kubenswrapper[4981]: I0227 19:19:48.477000 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"faa3914e-426b-4791-8199-a7630729baf0","Type":"ContainerStarted","Data":"1ab68f63ecb4d970b493500d1f84ddfa479978e09bb4f5454405ac3cff3972ba"} Feb 27 19:19:48 crc kubenswrapper[4981]: I0227 19:19:48.477406 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"faa3914e-426b-4791-8199-a7630729baf0","Type":"ContainerStarted","Data":"9b7e4dbef5e7da71bff472adbb75ceb0867028f6b271bd5767677e483d453167"} Feb 27 19:19:48 crc kubenswrapper[4981]: I0227 19:19:48.477425 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"faa3914e-426b-4791-8199-a7630729baf0","Type":"ContainerStarted","Data":"76e7c251baa64099ef1d3e55c24774ded93b6d65f1163a380cd5bba177c13696"} Feb 27 19:19:48 crc kubenswrapper[4981]: I0227 19:19:48.479325 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"47c21807-0372-41ce-a60d-021a45429037","Type":"ContainerDied","Data":"6f6c388e80a419e4222cbaabcaaf794744bfd1eb56f347c2b5114525851d67d5"} Feb 27 19:19:48 crc kubenswrapper[4981]: I0227 19:19:48.479358 4981 scope.go:117] "RemoveContainer" containerID="519d973b88c6da9ab8e2526937da89c47d2d8f93fded9a976c2ef5e0a0606a4c" Feb 27 19:19:48 crc kubenswrapper[4981]: I0227 19:19:48.479473 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 19:19:48 crc kubenswrapper[4981]: I0227 19:19:48.504963 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.504939393 podStartE2EDuration="2.504939393s" podCreationTimestamp="2026-02-27 19:19:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:19:48.496516324 +0000 UTC m=+2087.975297484" watchObservedRunningTime="2026-02-27 19:19:48.504939393 +0000 UTC m=+2087.983720553" Feb 27 19:19:48 crc kubenswrapper[4981]: I0227 19:19:48.521309 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 19:19:48 crc kubenswrapper[4981]: I0227 19:19:48.536279 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 19:19:48 crc kubenswrapper[4981]: I0227 19:19:48.548113 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 19:19:48 crc kubenswrapper[4981]: E0227 19:19:48.548697 4981 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="47c21807-0372-41ce-a60d-021a45429037" containerName="nova-scheduler-scheduler" Feb 27 19:19:48 crc kubenswrapper[4981]: I0227 19:19:48.548724 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="47c21807-0372-41ce-a60d-021a45429037" containerName="nova-scheduler-scheduler" Feb 27 19:19:48 crc kubenswrapper[4981]: I0227 19:19:48.548923 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="47c21807-0372-41ce-a60d-021a45429037" containerName="nova-scheduler-scheduler" Feb 27 19:19:48 crc kubenswrapper[4981]: I0227 19:19:48.549709 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 19:19:48 crc kubenswrapper[4981]: I0227 19:19:48.552758 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 27 19:19:48 crc kubenswrapper[4981]: I0227 19:19:48.570969 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 19:19:48 crc kubenswrapper[4981]: I0227 19:19:48.608272 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83a972b-9d9d-407c-a714-821900bc148e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d83a972b-9d9d-407c-a714-821900bc148e\") " pod="openstack/nova-scheduler-0" Feb 27 19:19:48 crc kubenswrapper[4981]: I0227 19:19:48.608345 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83a972b-9d9d-407c-a714-821900bc148e-config-data\") pod \"nova-scheduler-0\" (UID: \"d83a972b-9d9d-407c-a714-821900bc148e\") " pod="openstack/nova-scheduler-0" Feb 27 19:19:48 crc kubenswrapper[4981]: I0227 19:19:48.608432 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wklsc\" (UniqueName: 
\"kubernetes.io/projected/d83a972b-9d9d-407c-a714-821900bc148e-kube-api-access-wklsc\") pod \"nova-scheduler-0\" (UID: \"d83a972b-9d9d-407c-a714-821900bc148e\") " pod="openstack/nova-scheduler-0" Feb 27 19:19:48 crc kubenswrapper[4981]: I0227 19:19:48.710224 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83a972b-9d9d-407c-a714-821900bc148e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d83a972b-9d9d-407c-a714-821900bc148e\") " pod="openstack/nova-scheduler-0" Feb 27 19:19:48 crc kubenswrapper[4981]: I0227 19:19:48.710495 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83a972b-9d9d-407c-a714-821900bc148e-config-data\") pod \"nova-scheduler-0\" (UID: \"d83a972b-9d9d-407c-a714-821900bc148e\") " pod="openstack/nova-scheduler-0" Feb 27 19:19:48 crc kubenswrapper[4981]: I0227 19:19:48.710732 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wklsc\" (UniqueName: \"kubernetes.io/projected/d83a972b-9d9d-407c-a714-821900bc148e-kube-api-access-wklsc\") pod \"nova-scheduler-0\" (UID: \"d83a972b-9d9d-407c-a714-821900bc148e\") " pod="openstack/nova-scheduler-0" Feb 27 19:19:48 crc kubenswrapper[4981]: I0227 19:19:48.714387 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83a972b-9d9d-407c-a714-821900bc148e-config-data\") pod \"nova-scheduler-0\" (UID: \"d83a972b-9d9d-407c-a714-821900bc148e\") " pod="openstack/nova-scheduler-0" Feb 27 19:19:48 crc kubenswrapper[4981]: I0227 19:19:48.714819 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83a972b-9d9d-407c-a714-821900bc148e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d83a972b-9d9d-407c-a714-821900bc148e\") " 
pod="openstack/nova-scheduler-0" Feb 27 19:19:48 crc kubenswrapper[4981]: I0227 19:19:48.730235 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wklsc\" (UniqueName: \"kubernetes.io/projected/d83a972b-9d9d-407c-a714-821900bc148e-kube-api-access-wklsc\") pod \"nova-scheduler-0\" (UID: \"d83a972b-9d9d-407c-a714-821900bc148e\") " pod="openstack/nova-scheduler-0" Feb 27 19:19:48 crc kubenswrapper[4981]: I0227 19:19:48.865433 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.156960 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.222266 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af9c3f90-c49d-4d3f-9d4a-567f5683434b-logs\") pod \"af9c3f90-c49d-4d3f-9d4a-567f5683434b\" (UID: \"af9c3f90-c49d-4d3f-9d4a-567f5683434b\") " Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.222329 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2jrf\" (UniqueName: \"kubernetes.io/projected/af9c3f90-c49d-4d3f-9d4a-567f5683434b-kube-api-access-k2jrf\") pod \"af9c3f90-c49d-4d3f-9d4a-567f5683434b\" (UID: \"af9c3f90-c49d-4d3f-9d4a-567f5683434b\") " Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.222376 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af9c3f90-c49d-4d3f-9d4a-567f5683434b-combined-ca-bundle\") pod \"af9c3f90-c49d-4d3f-9d4a-567f5683434b\" (UID: \"af9c3f90-c49d-4d3f-9d4a-567f5683434b\") " Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.222486 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/af9c3f90-c49d-4d3f-9d4a-567f5683434b-config-data\") pod \"af9c3f90-c49d-4d3f-9d4a-567f5683434b\" (UID: \"af9c3f90-c49d-4d3f-9d4a-567f5683434b\") " Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.222566 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/af9c3f90-c49d-4d3f-9d4a-567f5683434b-nova-metadata-tls-certs\") pod \"af9c3f90-c49d-4d3f-9d4a-567f5683434b\" (UID: \"af9c3f90-c49d-4d3f-9d4a-567f5683434b\") " Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.224294 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af9c3f90-c49d-4d3f-9d4a-567f5683434b-logs" (OuterVolumeSpecName: "logs") pod "af9c3f90-c49d-4d3f-9d4a-567f5683434b" (UID: "af9c3f90-c49d-4d3f-9d4a-567f5683434b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.231963 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af9c3f90-c49d-4d3f-9d4a-567f5683434b-kube-api-access-k2jrf" (OuterVolumeSpecName: "kube-api-access-k2jrf") pod "af9c3f90-c49d-4d3f-9d4a-567f5683434b" (UID: "af9c3f90-c49d-4d3f-9d4a-567f5683434b"). InnerVolumeSpecName "kube-api-access-k2jrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.257282 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af9c3f90-c49d-4d3f-9d4a-567f5683434b-config-data" (OuterVolumeSpecName: "config-data") pod "af9c3f90-c49d-4d3f-9d4a-567f5683434b" (UID: "af9c3f90-c49d-4d3f-9d4a-567f5683434b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.261273 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af9c3f90-c49d-4d3f-9d4a-567f5683434b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af9c3f90-c49d-4d3f-9d4a-567f5683434b" (UID: "af9c3f90-c49d-4d3f-9d4a-567f5683434b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.289976 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af9c3f90-c49d-4d3f-9d4a-567f5683434b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "af9c3f90-c49d-4d3f-9d4a-567f5683434b" (UID: "af9c3f90-c49d-4d3f-9d4a-567f5683434b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.325196 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af9c3f90-c49d-4d3f-9d4a-567f5683434b-logs\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.325231 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2jrf\" (UniqueName: \"kubernetes.io/projected/af9c3f90-c49d-4d3f-9d4a-567f5683434b-kube-api-access-k2jrf\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.325243 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af9c3f90-c49d-4d3f-9d4a-567f5683434b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.325252 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af9c3f90-c49d-4d3f-9d4a-567f5683434b-config-data\") on 
node \"crc\" DevicePath \"\"" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.325269 4981 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/af9c3f90-c49d-4d3f-9d4a-567f5683434b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:19:49 crc kubenswrapper[4981]: W0227 19:19:49.359416 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd83a972b_9d9d_407c_a714_821900bc148e.slice/crio-f34379611e7efadca5dc02f554802ae5f8ffe6db4e8a3e8ed4ecdedee9c0e2e1 WatchSource:0}: Error finding container f34379611e7efadca5dc02f554802ae5f8ffe6db4e8a3e8ed4ecdedee9c0e2e1: Status 404 returned error can't find the container with id f34379611e7efadca5dc02f554802ae5f8ffe6db4e8a3e8ed4ecdedee9c0e2e1 Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.362403 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.490558 4981 generic.go:334] "Generic (PLEG): container finished" podID="af9c3f90-c49d-4d3f-9d4a-567f5683434b" containerID="866b94f0fde1088c3ac2a9d2972b5bb9cc7b5fb09f1b0318f62b52074b51d020" exitCode=0 Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.490637 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.490684 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af9c3f90-c49d-4d3f-9d4a-567f5683434b","Type":"ContainerDied","Data":"866b94f0fde1088c3ac2a9d2972b5bb9cc7b5fb09f1b0318f62b52074b51d020"} Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.490733 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af9c3f90-c49d-4d3f-9d4a-567f5683434b","Type":"ContainerDied","Data":"adb8060916a0ec1de7fcdb56cf5058306bfcd73a65a3d647f566d71cc499dd7f"} Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.490758 4981 scope.go:117] "RemoveContainer" containerID="866b94f0fde1088c3ac2a9d2972b5bb9cc7b5fb09f1b0318f62b52074b51d020" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.493278 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d83a972b-9d9d-407c-a714-821900bc148e","Type":"ContainerStarted","Data":"f34379611e7efadca5dc02f554802ae5f8ffe6db4e8a3e8ed4ecdedee9c0e2e1"} Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.516404 4981 scope.go:117] "RemoveContainer" containerID="166f3477f6451532c79a3cd47a6cb78dd4806dcbd36450c24fa17f776541affa" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.530716 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.553231 4981 scope.go:117] "RemoveContainer" containerID="866b94f0fde1088c3ac2a9d2972b5bb9cc7b5fb09f1b0318f62b52074b51d020" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.555134 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 19:19:49 crc kubenswrapper[4981]: E0227 19:19:49.561379 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"866b94f0fde1088c3ac2a9d2972b5bb9cc7b5fb09f1b0318f62b52074b51d020\": container with ID starting with 866b94f0fde1088c3ac2a9d2972b5bb9cc7b5fb09f1b0318f62b52074b51d020 not found: ID does not exist" containerID="866b94f0fde1088c3ac2a9d2972b5bb9cc7b5fb09f1b0318f62b52074b51d020" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.561426 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"866b94f0fde1088c3ac2a9d2972b5bb9cc7b5fb09f1b0318f62b52074b51d020"} err="failed to get container status \"866b94f0fde1088c3ac2a9d2972b5bb9cc7b5fb09f1b0318f62b52074b51d020\": rpc error: code = NotFound desc = could not find container \"866b94f0fde1088c3ac2a9d2972b5bb9cc7b5fb09f1b0318f62b52074b51d020\": container with ID starting with 866b94f0fde1088c3ac2a9d2972b5bb9cc7b5fb09f1b0318f62b52074b51d020 not found: ID does not exist" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.561453 4981 scope.go:117] "RemoveContainer" containerID="166f3477f6451532c79a3cd47a6cb78dd4806dcbd36450c24fa17f776541affa" Feb 27 19:19:49 crc kubenswrapper[4981]: E0227 19:19:49.561926 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"166f3477f6451532c79a3cd47a6cb78dd4806dcbd36450c24fa17f776541affa\": container with ID starting with 166f3477f6451532c79a3cd47a6cb78dd4806dcbd36450c24fa17f776541affa not found: ID does not exist" containerID="166f3477f6451532c79a3cd47a6cb78dd4806dcbd36450c24fa17f776541affa" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.561948 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"166f3477f6451532c79a3cd47a6cb78dd4806dcbd36450c24fa17f776541affa"} err="failed to get container status \"166f3477f6451532c79a3cd47a6cb78dd4806dcbd36450c24fa17f776541affa\": rpc error: code = NotFound desc = could not find container \"166f3477f6451532c79a3cd47a6cb78dd4806dcbd36450c24fa17f776541affa\": container with ID 
starting with 166f3477f6451532c79a3cd47a6cb78dd4806dcbd36450c24fa17f776541affa not found: ID does not exist" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.568474 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 27 19:19:49 crc kubenswrapper[4981]: E0227 19:19:49.568990 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af9c3f90-c49d-4d3f-9d4a-567f5683434b" containerName="nova-metadata-log" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.569010 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="af9c3f90-c49d-4d3f-9d4a-567f5683434b" containerName="nova-metadata-log" Feb 27 19:19:49 crc kubenswrapper[4981]: E0227 19:19:49.569026 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af9c3f90-c49d-4d3f-9d4a-567f5683434b" containerName="nova-metadata-metadata" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.569033 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="af9c3f90-c49d-4d3f-9d4a-567f5683434b" containerName="nova-metadata-metadata" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.569266 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="af9c3f90-c49d-4d3f-9d4a-567f5683434b" containerName="nova-metadata-metadata" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.569287 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="af9c3f90-c49d-4d3f-9d4a-567f5683434b" containerName="nova-metadata-log" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.570305 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.582250 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.583751 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.583838 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.638327 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c7f2b23-f800-4970-b530-aac7387e0936-logs\") pod \"nova-metadata-0\" (UID: \"0c7f2b23-f800-4970-b530-aac7387e0936\") " pod="openstack/nova-metadata-0" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.638400 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5gt2\" (UniqueName: \"kubernetes.io/projected/0c7f2b23-f800-4970-b530-aac7387e0936-kube-api-access-r5gt2\") pod \"nova-metadata-0\" (UID: \"0c7f2b23-f800-4970-b530-aac7387e0936\") " pod="openstack/nova-metadata-0" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.639196 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c7f2b23-f800-4970-b530-aac7387e0936-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0c7f2b23-f800-4970-b530-aac7387e0936\") " pod="openstack/nova-metadata-0" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.639243 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c7f2b23-f800-4970-b530-aac7387e0936-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"0c7f2b23-f800-4970-b530-aac7387e0936\") " pod="openstack/nova-metadata-0" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.639370 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c7f2b23-f800-4970-b530-aac7387e0936-config-data\") pod \"nova-metadata-0\" (UID: \"0c7f2b23-f800-4970-b530-aac7387e0936\") " pod="openstack/nova-metadata-0" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.647133 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47c21807-0372-41ce-a60d-021a45429037" path="/var/lib/kubelet/pods/47c21807-0372-41ce-a60d-021a45429037/volumes" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.647766 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af9c3f90-c49d-4d3f-9d4a-567f5683434b" path="/var/lib/kubelet/pods/af9c3f90-c49d-4d3f-9d4a-567f5683434b/volumes" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.741610 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c7f2b23-f800-4970-b530-aac7387e0936-logs\") pod \"nova-metadata-0\" (UID: \"0c7f2b23-f800-4970-b530-aac7387e0936\") " pod="openstack/nova-metadata-0" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.742006 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5gt2\" (UniqueName: \"kubernetes.io/projected/0c7f2b23-f800-4970-b530-aac7387e0936-kube-api-access-r5gt2\") pod \"nova-metadata-0\" (UID: \"0c7f2b23-f800-4970-b530-aac7387e0936\") " pod="openstack/nova-metadata-0" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.742092 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c7f2b23-f800-4970-b530-aac7387e0936-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"0c7f2b23-f800-4970-b530-aac7387e0936\") " pod="openstack/nova-metadata-0" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.742113 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c7f2b23-f800-4970-b530-aac7387e0936-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0c7f2b23-f800-4970-b530-aac7387e0936\") " pod="openstack/nova-metadata-0" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.742134 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c7f2b23-f800-4970-b530-aac7387e0936-logs\") pod \"nova-metadata-0\" (UID: \"0c7f2b23-f800-4970-b530-aac7387e0936\") " pod="openstack/nova-metadata-0" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.742349 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c7f2b23-f800-4970-b530-aac7387e0936-config-data\") pod \"nova-metadata-0\" (UID: \"0c7f2b23-f800-4970-b530-aac7387e0936\") " pod="openstack/nova-metadata-0" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.747170 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c7f2b23-f800-4970-b530-aac7387e0936-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0c7f2b23-f800-4970-b530-aac7387e0936\") " pod="openstack/nova-metadata-0" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.747588 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c7f2b23-f800-4970-b530-aac7387e0936-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0c7f2b23-f800-4970-b530-aac7387e0936\") " pod="openstack/nova-metadata-0" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.748039 4981 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c7f2b23-f800-4970-b530-aac7387e0936-config-data\") pod \"nova-metadata-0\" (UID: \"0c7f2b23-f800-4970-b530-aac7387e0936\") " pod="openstack/nova-metadata-0" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.761818 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5gt2\" (UniqueName: \"kubernetes.io/projected/0c7f2b23-f800-4970-b530-aac7387e0936-kube-api-access-r5gt2\") pod \"nova-metadata-0\" (UID: \"0c7f2b23-f800-4970-b530-aac7387e0936\") " pod="openstack/nova-metadata-0" Feb 27 19:19:49 crc kubenswrapper[4981]: I0227 19:19:49.919873 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 19:19:50 crc kubenswrapper[4981]: I0227 19:19:50.249301 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:19:50 crc kubenswrapper[4981]: I0227 19:19:50.249746 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:19:50 crc kubenswrapper[4981]: I0227 19:19:50.355794 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 19:19:50 crc kubenswrapper[4981]: I0227 19:19:50.502292 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c7f2b23-f800-4970-b530-aac7387e0936","Type":"ContainerStarted","Data":"b68e0bd12b334c4ff9b7ac5e9cc423457c71f31b3839b31ec2c6e093fac9743d"} Feb 27 19:19:50 crc 
kubenswrapper[4981]: I0227 19:19:50.502345 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c7f2b23-f800-4970-b530-aac7387e0936","Type":"ContainerStarted","Data":"4cc9008b747e4ec01afb5068786242822aa0e10c218b91d78282d277f02eb97b"} Feb 27 19:19:50 crc kubenswrapper[4981]: I0227 19:19:50.504297 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d83a972b-9d9d-407c-a714-821900bc148e","Type":"ContainerStarted","Data":"3152c46cc349f7837726327bf3254bd69f1fb1bbbe347a8afb428e8a80072528"} Feb 27 19:19:50 crc kubenswrapper[4981]: I0227 19:19:50.531808 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.531789248 podStartE2EDuration="2.531789248s" podCreationTimestamp="2026-02-27 19:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:19:50.526416163 +0000 UTC m=+2090.005197343" watchObservedRunningTime="2026-02-27 19:19:50.531789248 +0000 UTC m=+2090.010570408" Feb 27 19:19:51 crc kubenswrapper[4981]: I0227 19:19:51.518295 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c7f2b23-f800-4970-b530-aac7387e0936","Type":"ContainerStarted","Data":"c360cdd61e3163e8b02b644a2169bebd548e95d9e6f4be8bc924e36168d7c4bd"} Feb 27 19:19:51 crc kubenswrapper[4981]: I0227 19:19:51.546776 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.546759993 podStartE2EDuration="2.546759993s" podCreationTimestamp="2026-02-27 19:19:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:19:51.535819457 +0000 UTC m=+2091.014600617" watchObservedRunningTime="2026-02-27 19:19:51.546759993 +0000 UTC 
m=+2091.025541153" Feb 27 19:19:53 crc kubenswrapper[4981]: I0227 19:19:53.865824 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 27 19:19:54 crc kubenswrapper[4981]: I0227 19:19:54.921385 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 19:19:54 crc kubenswrapper[4981]: I0227 19:19:54.921686 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 19:19:57 crc kubenswrapper[4981]: I0227 19:19:57.157784 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 19:19:57 crc kubenswrapper[4981]: I0227 19:19:57.159102 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 19:19:58 crc kubenswrapper[4981]: I0227 19:19:58.171328 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="faa3914e-426b-4791-8199-a7630729baf0" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.222:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 19:19:58 crc kubenswrapper[4981]: I0227 19:19:58.171720 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="faa3914e-426b-4791-8199-a7630729baf0" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.222:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 19:19:58 crc kubenswrapper[4981]: I0227 19:19:58.866083 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 27 19:19:58 crc kubenswrapper[4981]: I0227 19:19:58.905049 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 27 19:19:59 crc kubenswrapper[4981]: I0227 19:19:59.639783 4981 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 27 19:19:59 crc kubenswrapper[4981]: I0227 19:19:59.920853 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 27 19:19:59 crc kubenswrapper[4981]: I0227 19:19:59.922196 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 27 19:20:00 crc kubenswrapper[4981]: I0227 19:20:00.188800 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537000-sn5s2"] Feb 27 19:20:00 crc kubenswrapper[4981]: I0227 19:20:00.190287 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537000-sn5s2" Feb 27 19:20:00 crc kubenswrapper[4981]: I0227 19:20:00.193149 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf" Feb 27 19:20:00 crc kubenswrapper[4981]: I0227 19:20:00.194370 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:20:00 crc kubenswrapper[4981]: I0227 19:20:00.202565 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:20:00 crc kubenswrapper[4981]: I0227 19:20:00.239743 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7b59\" (UniqueName: \"kubernetes.io/projected/bf7d5ec0-a275-4956-8a1d-9455ffd87ee5-kube-api-access-n7b59\") pod \"auto-csr-approver-29537000-sn5s2\" (UID: \"bf7d5ec0-a275-4956-8a1d-9455ffd87ee5\") " pod="openshift-infra/auto-csr-approver-29537000-sn5s2" Feb 27 19:20:00 crc kubenswrapper[4981]: I0227 19:20:00.254393 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537000-sn5s2"] Feb 27 19:20:00 crc kubenswrapper[4981]: I0227 19:20:00.341416 4981 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7b59\" (UniqueName: \"kubernetes.io/projected/bf7d5ec0-a275-4956-8a1d-9455ffd87ee5-kube-api-access-n7b59\") pod \"auto-csr-approver-29537000-sn5s2\" (UID: \"bf7d5ec0-a275-4956-8a1d-9455ffd87ee5\") " pod="openshift-infra/auto-csr-approver-29537000-sn5s2" Feb 27 19:20:00 crc kubenswrapper[4981]: I0227 19:20:00.374787 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7b59\" (UniqueName: \"kubernetes.io/projected/bf7d5ec0-a275-4956-8a1d-9455ffd87ee5-kube-api-access-n7b59\") pod \"auto-csr-approver-29537000-sn5s2\" (UID: \"bf7d5ec0-a275-4956-8a1d-9455ffd87ee5\") " pod="openshift-infra/auto-csr-approver-29537000-sn5s2" Feb 27 19:20:00 crc kubenswrapper[4981]: I0227 19:20:00.510079 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537000-sn5s2" Feb 27 19:20:00 crc kubenswrapper[4981]: I0227 19:20:00.931211 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0c7f2b23-f800-4970-b530-aac7387e0936" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.224:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 19:20:00 crc kubenswrapper[4981]: I0227 19:20:00.931293 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0c7f2b23-f800-4970-b530-aac7387e0936" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.224:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 19:20:00 crc kubenswrapper[4981]: I0227 19:20:00.979687 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537000-sn5s2"] Feb 27 19:20:00 crc kubenswrapper[4981]: I0227 19:20:00.988200 4981 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 27 19:20:01 crc kubenswrapper[4981]: I0227 19:20:01.658545 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537000-sn5s2" event={"ID":"bf7d5ec0-a275-4956-8a1d-9455ffd87ee5","Type":"ContainerStarted","Data":"62d8f3f248fe7b07568bf3127a817ac42d5763b83b205e291dac309e5d97be73"} Feb 27 19:20:02 crc kubenswrapper[4981]: I0227 19:20:02.669044 4981 generic.go:334] "Generic (PLEG): container finished" podID="bf7d5ec0-a275-4956-8a1d-9455ffd87ee5" containerID="90fe2973c9b0433a16b5e1e8b5ca508e7716c85505362b6649ccbd173f33b91c" exitCode=0 Feb 27 19:20:02 crc kubenswrapper[4981]: I0227 19:20:02.669157 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537000-sn5s2" event={"ID":"bf7d5ec0-a275-4956-8a1d-9455ffd87ee5","Type":"ContainerDied","Data":"90fe2973c9b0433a16b5e1e8b5ca508e7716c85505362b6649ccbd173f33b91c"} Feb 27 19:20:04 crc kubenswrapper[4981]: I0227 19:20:04.015908 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537000-sn5s2" Feb 27 19:20:04 crc kubenswrapper[4981]: I0227 19:20:04.134020 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7b59\" (UniqueName: \"kubernetes.io/projected/bf7d5ec0-a275-4956-8a1d-9455ffd87ee5-kube-api-access-n7b59\") pod \"bf7d5ec0-a275-4956-8a1d-9455ffd87ee5\" (UID: \"bf7d5ec0-a275-4956-8a1d-9455ffd87ee5\") " Feb 27 19:20:04 crc kubenswrapper[4981]: I0227 19:20:04.139475 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf7d5ec0-a275-4956-8a1d-9455ffd87ee5-kube-api-access-n7b59" (OuterVolumeSpecName: "kube-api-access-n7b59") pod "bf7d5ec0-a275-4956-8a1d-9455ffd87ee5" (UID: "bf7d5ec0-a275-4956-8a1d-9455ffd87ee5"). InnerVolumeSpecName "kube-api-access-n7b59". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:04 crc kubenswrapper[4981]: I0227 19:20:04.237249 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7b59\" (UniqueName: \"kubernetes.io/projected/bf7d5ec0-a275-4956-8a1d-9455ffd87ee5-kube-api-access-n7b59\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:04 crc kubenswrapper[4981]: I0227 19:20:04.665362 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 27 19:20:04 crc kubenswrapper[4981]: I0227 19:20:04.707109 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537000-sn5s2" event={"ID":"bf7d5ec0-a275-4956-8a1d-9455ffd87ee5","Type":"ContainerDied","Data":"62d8f3f248fe7b07568bf3127a817ac42d5763b83b205e291dac309e5d97be73"} Feb 27 19:20:04 crc kubenswrapper[4981]: I0227 19:20:04.707454 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62d8f3f248fe7b07568bf3127a817ac42d5763b83b205e291dac309e5d97be73" Feb 27 19:20:04 crc kubenswrapper[4981]: I0227 19:20:04.707254 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537000-sn5s2" Feb 27 19:20:05 crc kubenswrapper[4981]: I0227 19:20:05.080986 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536994-kg6pv"] Feb 27 19:20:05 crc kubenswrapper[4981]: I0227 19:20:05.090946 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536994-kg6pv"] Feb 27 19:20:05 crc kubenswrapper[4981]: I0227 19:20:05.640105 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91e1417f-019e-484a-afd2-05ae98b58cee" path="/var/lib/kubelet/pods/91e1417f-019e-484a-afd2-05ae98b58cee/volumes" Feb 27 19:20:07 crc kubenswrapper[4981]: I0227 19:20:07.163528 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 27 19:20:07 crc kubenswrapper[4981]: I0227 19:20:07.164133 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 27 19:20:07 crc kubenswrapper[4981]: I0227 19:20:07.164511 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 27 19:20:07 crc kubenswrapper[4981]: I0227 19:20:07.169803 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 27 19:20:07 crc kubenswrapper[4981]: I0227 19:20:07.740445 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 27 19:20:07 crc kubenswrapper[4981]: I0227 19:20:07.746822 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 27 19:20:09 crc kubenswrapper[4981]: I0227 19:20:09.927038 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 27 19:20:09 crc kubenswrapper[4981]: I0227 19:20:09.928978 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 27 
19:20:09 crc kubenswrapper[4981]: I0227 19:20:09.932590 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 27 19:20:10 crc kubenswrapper[4981]: I0227 19:20:10.769849 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 27 19:20:12 crc kubenswrapper[4981]: I0227 19:20:12.080907 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-2srwk"] Feb 27 19:20:12 crc kubenswrapper[4981]: I0227 19:20:12.093461 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-2srwk"] Feb 27 19:20:13 crc kubenswrapper[4981]: I0227 19:20:13.638912 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b46d876a-df60-46ef-a33d-6f2ddb4261f6" path="/var/lib/kubelet/pods/b46d876a-df60-46ef-a33d-6f2ddb4261f6/volumes" Feb 27 19:20:20 crc kubenswrapper[4981]: I0227 19:20:20.105941 4981 scope.go:117] "RemoveContainer" containerID="a5d27a8ad6164dc28ebcc1185e3ce3c12770dba5e450d93a6006b8cf3e9f549a" Feb 27 19:20:20 crc kubenswrapper[4981]: I0227 19:20:20.249025 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:20:20 crc kubenswrapper[4981]: I0227 19:20:20.249407 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:20:23 crc kubenswrapper[4981]: I0227 19:20:23.031324 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-m4gk2"] Feb 27 19:20:23 
crc kubenswrapper[4981]: I0227 19:20:23.041911 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-m4gk2"] Feb 27 19:20:23 crc kubenswrapper[4981]: I0227 19:20:23.641457 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67ba26f0-21ac-43b2-a954-3ab2b764cc7d" path="/var/lib/kubelet/pods/67ba26f0-21ac-43b2-a954-3ab2b764cc7d/volumes" Feb 27 19:20:29 crc kubenswrapper[4981]: I0227 19:20:29.036863 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-spmns"] Feb 27 19:20:29 crc kubenswrapper[4981]: I0227 19:20:29.048656 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-spmns"] Feb 27 19:20:29 crc kubenswrapper[4981]: I0227 19:20:29.642261 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dd430a2-0c5e-4acc-9123-6bee2f09aa67" path="/var/lib/kubelet/pods/1dd430a2-0c5e-4acc-9123-6bee2f09aa67/volumes" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.069158 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e72a-account-create-update-fdvkk"] Feb 27 19:20:31 crc kubenswrapper[4981]: E0227 19:20:31.070674 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf7d5ec0-a275-4956-8a1d-9455ffd87ee5" containerName="oc" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.070742 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf7d5ec0-a275-4956-8a1d-9455ffd87ee5" containerName="oc" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.070974 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf7d5ec0-a275-4956-8a1d-9455ffd87ee5" containerName="oc" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.071658 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e72a-account-create-update-fdvkk" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.113448 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.125856 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e72a-account-create-update-fdvkk"] Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.179395 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bf5661d-549c-4591-8f93-02bc09f63f29-operator-scripts\") pod \"glance-e72a-account-create-update-fdvkk\" (UID: \"5bf5661d-549c-4591-8f93-02bc09f63f29\") " pod="openstack/glance-e72a-account-create-update-fdvkk" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.179518 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hflmp\" (UniqueName: \"kubernetes.io/projected/5bf5661d-549c-4591-8f93-02bc09f63f29-kube-api-access-hflmp\") pod \"glance-e72a-account-create-update-fdvkk\" (UID: \"5bf5661d-549c-4591-8f93-02bc09f63f29\") " pod="openstack/glance-e72a-account-create-update-fdvkk" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.243833 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.244102 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="6047b4ff-4778-43fd-8d8e-c84b76ff271e" containerName="openstackclient" containerID="cri-o://ad16f6eebbf2beec846b93606e3bf6e09e066977ec08c59d3b1d01db8b58e6a8" gracePeriod=2 Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.275134 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 
19:20:31.280974 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hflmp\" (UniqueName: \"kubernetes.io/projected/5bf5661d-549c-4591-8f93-02bc09f63f29-kube-api-access-hflmp\") pod \"glance-e72a-account-create-update-fdvkk\" (UID: \"5bf5661d-549c-4591-8f93-02bc09f63f29\") " pod="openstack/glance-e72a-account-create-update-fdvkk" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.281156 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bf5661d-549c-4591-8f93-02bc09f63f29-operator-scripts\") pod \"glance-e72a-account-create-update-fdvkk\" (UID: \"5bf5661d-549c-4591-8f93-02bc09f63f29\") " pod="openstack/glance-e72a-account-create-update-fdvkk" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.282169 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bf5661d-549c-4591-8f93-02bc09f63f29-operator-scripts\") pod \"glance-e72a-account-create-update-fdvkk\" (UID: \"5bf5661d-549c-4591-8f93-02bc09f63f29\") " pod="openstack/glance-e72a-account-create-update-fdvkk" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.306568 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-d9wv4"] Feb 27 19:20:31 crc kubenswrapper[4981]: E0227 19:20:31.307193 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6047b4ff-4778-43fd-8d8e-c84b76ff271e" containerName="openstackclient" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.307213 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="6047b4ff-4778-43fd-8d8e-c84b76ff271e" containerName="openstackclient" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.307422 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="6047b4ff-4778-43fd-8d8e-c84b76ff271e" containerName="openstackclient" Feb 27 19:20:31 crc 
kubenswrapper[4981]: I0227 19:20:31.308277 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-d9wv4" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.334139 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-d9wv4"] Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.335687 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.357575 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hflmp\" (UniqueName: \"kubernetes.io/projected/5bf5661d-549c-4591-8f93-02bc09f63f29-kube-api-access-hflmp\") pod \"glance-e72a-account-create-update-fdvkk\" (UID: \"5bf5661d-549c-4591-8f93-02bc09f63f29\") " pod="openstack/glance-e72a-account-create-update-fdvkk" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.361152 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-badb-account-create-update-4hf5f"] Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.391139 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-badb-account-create-update-4hf5f"] Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.413208 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9966-account-create-update-v784d"] Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.414964 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9966-account-create-update-v784d" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.417894 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.426962 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9966-account-create-update-v784d"] Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.432346 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e72a-account-create-update-fdvkk" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.479469 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-badb-account-create-update-m5cpb"] Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.480928 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-badb-account-create-update-m5cpb" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.483326 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.487229 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgqw5\" (UniqueName: \"kubernetes.io/projected/75dabc26-0258-4ef3-b0c8-04231f2fa5c5-kube-api-access-lgqw5\") pod \"root-account-create-update-d9wv4\" (UID: \"75dabc26-0258-4ef3-b0c8-04231f2fa5c5\") " pod="openstack/root-account-create-update-d9wv4" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.487413 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75dabc26-0258-4ef3-b0c8-04231f2fa5c5-operator-scripts\") pod \"root-account-create-update-d9wv4\" (UID: \"75dabc26-0258-4ef3-b0c8-04231f2fa5c5\") " 
pod="openstack/root-account-create-update-d9wv4" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.504282 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-badb-account-create-update-m5cpb"] Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.547176 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-bbcc-account-create-update-rw9k7"] Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.585130 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-bbcc-account-create-update-rw9k7"] Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.589438 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bblrm\" (UniqueName: \"kubernetes.io/projected/71b893f8-fc1b-4dba-b63a-3c759969ae3c-kube-api-access-bblrm\") pod \"placement-9966-account-create-update-v784d\" (UID: \"71b893f8-fc1b-4dba-b63a-3c759969ae3c\") " pod="openstack/placement-9966-account-create-update-v784d" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.589482 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71b893f8-fc1b-4dba-b63a-3c759969ae3c-operator-scripts\") pod \"placement-9966-account-create-update-v784d\" (UID: \"71b893f8-fc1b-4dba-b63a-3c759969ae3c\") " pod="openstack/placement-9966-account-create-update-v784d" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.589530 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgqw5\" (UniqueName: \"kubernetes.io/projected/75dabc26-0258-4ef3-b0c8-04231f2fa5c5-kube-api-access-lgqw5\") pod \"root-account-create-update-d9wv4\" (UID: \"75dabc26-0258-4ef3-b0c8-04231f2fa5c5\") " pod="openstack/root-account-create-update-d9wv4" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.589565 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4r2w\" (UniqueName: \"kubernetes.io/projected/a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21-kube-api-access-w4r2w\") pod \"nova-api-badb-account-create-update-m5cpb\" (UID: \"a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21\") " pod="openstack/nova-api-badb-account-create-update-m5cpb" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.589589 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21-operator-scripts\") pod \"nova-api-badb-account-create-update-m5cpb\" (UID: \"a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21\") " pod="openstack/nova-api-badb-account-create-update-m5cpb" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.589608 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75dabc26-0258-4ef3-b0c8-04231f2fa5c5-operator-scripts\") pod \"root-account-create-update-d9wv4\" (UID: \"75dabc26-0258-4ef3-b0c8-04231f2fa5c5\") " pod="openstack/root-account-create-update-d9wv4" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.590472 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75dabc26-0258-4ef3-b0c8-04231f2fa5c5-operator-scripts\") pod \"root-account-create-update-d9wv4\" (UID: \"75dabc26-0258-4ef3-b0c8-04231f2fa5c5\") " pod="openstack/root-account-create-update-d9wv4" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.620692 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-bbcc-account-create-update-rc6qw"] Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.621962 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-bbcc-account-create-update-rc6qw" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.631636 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.640144 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgqw5\" (UniqueName: \"kubernetes.io/projected/75dabc26-0258-4ef3-b0c8-04231f2fa5c5-kube-api-access-lgqw5\") pod \"root-account-create-update-d9wv4\" (UID: \"75dabc26-0258-4ef3-b0c8-04231f2fa5c5\") " pod="openstack/root-account-create-update-d9wv4" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.656672 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32" path="/var/lib/kubelet/pods/2568e8f5-ef00-4eb0-aec5-ee93e7bdeb32/volumes" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.662802 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eccdd187-3938-4331-82f9-b5dac2e9c1c1" path="/var/lib/kubelet/pods/eccdd187-3938-4331-82f9-b5dac2e9c1c1/volumes" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.663538 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-5xwl7"] Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.679832 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-d9wv4" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.705541 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4r2w\" (UniqueName: \"kubernetes.io/projected/a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21-kube-api-access-w4r2w\") pod \"nova-api-badb-account-create-update-m5cpb\" (UID: \"a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21\") " pod="openstack/nova-api-badb-account-create-update-m5cpb" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.718274 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21-operator-scripts\") pod \"nova-api-badb-account-create-update-m5cpb\" (UID: \"a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21\") " pod="openstack/nova-api-badb-account-create-update-m5cpb" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.718695 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efe6b6e0-5d7c-4207-b5fc-44f510e301b7-operator-scripts\") pod \"nova-cell0-bbcc-account-create-update-rc6qw\" (UID: \"efe6b6e0-5d7c-4207-b5fc-44f510e301b7\") " pod="openstack/nova-cell0-bbcc-account-create-update-rc6qw" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.719050 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bblrm\" (UniqueName: \"kubernetes.io/projected/71b893f8-fc1b-4dba-b63a-3c759969ae3c-kube-api-access-bblrm\") pod \"placement-9966-account-create-update-v784d\" (UID: \"71b893f8-fc1b-4dba-b63a-3c759969ae3c\") " pod="openstack/placement-9966-account-create-update-v784d" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.719111 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/71b893f8-fc1b-4dba-b63a-3c759969ae3c-operator-scripts\") pod \"placement-9966-account-create-update-v784d\" (UID: \"71b893f8-fc1b-4dba-b63a-3c759969ae3c\") " pod="openstack/placement-9966-account-create-update-v784d" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.719193 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll2xj\" (UniqueName: \"kubernetes.io/projected/efe6b6e0-5d7c-4207-b5fc-44f510e301b7-kube-api-access-ll2xj\") pod \"nova-cell0-bbcc-account-create-update-rc6qw\" (UID: \"efe6b6e0-5d7c-4207-b5fc-44f510e301b7\") " pod="openstack/nova-cell0-bbcc-account-create-update-rc6qw" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.719564 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21-operator-scripts\") pod \"nova-api-badb-account-create-update-m5cpb\" (UID: \"a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21\") " pod="openstack/nova-api-badb-account-create-update-m5cpb" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.720202 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71b893f8-fc1b-4dba-b63a-3c759969ae3c-operator-scripts\") pod \"placement-9966-account-create-update-v784d\" (UID: \"71b893f8-fc1b-4dba-b63a-3c759969ae3c\") " pod="openstack/placement-9966-account-create-update-v784d" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.731019 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-6bhp6"] Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.737435 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-6bhp6" podUID="aec1d5c5-b41c-4d8b-9810-04a25a18c1b1" containerName="openstack-network-exporter" 
containerID="cri-o://52127a29349e4ff336f88e8c487c475618ad44fe9d5adba506f6a7274fdd5580" gracePeriod=30 Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.759176 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.759622 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="d923459f-90f4-4399-80a0-4e22daa1eadf" containerName="ovn-northd" containerID="cri-o://63d0d07ec18342868dff13620687889bed168fed03e7ed3e8bab9795de7f6b30" gracePeriod=30 Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.759820 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="d923459f-90f4-4399-80a0-4e22daa1eadf" containerName="openstack-network-exporter" containerID="cri-o://3b617bffdd5cc1d450fde9acb69cf5146fe9369c179986b8ac76e6fe9affd265" gracePeriod=30 Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.779871 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4r2w\" (UniqueName: \"kubernetes.io/projected/a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21-kube-api-access-w4r2w\") pod \"nova-api-badb-account-create-update-m5cpb\" (UID: \"a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21\") " pod="openstack/nova-api-badb-account-create-update-m5cpb" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.796317 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bblrm\" (UniqueName: \"kubernetes.io/projected/71b893f8-fc1b-4dba-b63a-3c759969ae3c-kube-api-access-bblrm\") pod \"placement-9966-account-create-update-v784d\" (UID: \"71b893f8-fc1b-4dba-b63a-3c759969ae3c\") " pod="openstack/placement-9966-account-create-update-v784d" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.822749 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/efe6b6e0-5d7c-4207-b5fc-44f510e301b7-operator-scripts\") pod \"nova-cell0-bbcc-account-create-update-rc6qw\" (UID: \"efe6b6e0-5d7c-4207-b5fc-44f510e301b7\") " pod="openstack/nova-cell0-bbcc-account-create-update-rc6qw" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.822832 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll2xj\" (UniqueName: \"kubernetes.io/projected/efe6b6e0-5d7c-4207-b5fc-44f510e301b7-kube-api-access-ll2xj\") pod \"nova-cell0-bbcc-account-create-update-rc6qw\" (UID: \"efe6b6e0-5d7c-4207-b5fc-44f510e301b7\") " pod="openstack/nova-cell0-bbcc-account-create-update-rc6qw" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.823801 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efe6b6e0-5d7c-4207-b5fc-44f510e301b7-operator-scripts\") pod \"nova-cell0-bbcc-account-create-update-rc6qw\" (UID: \"efe6b6e0-5d7c-4207-b5fc-44f510e301b7\") " pod="openstack/nova-cell0-bbcc-account-create-update-rc6qw" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.830807 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-bbcc-account-create-update-rc6qw"] Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.869093 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-n5d2t"] Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.869724 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll2xj\" (UniqueName: \"kubernetes.io/projected/efe6b6e0-5d7c-4207-b5fc-44f510e301b7-kube-api-access-ll2xj\") pod \"nova-cell0-bbcc-account-create-update-rc6qw\" (UID: \"efe6b6e0-5d7c-4207-b5fc-44f510e301b7\") " pod="openstack/nova-cell0-bbcc-account-create-update-rc6qw" Feb 27 19:20:31 crc kubenswrapper[4981]: I0227 19:20:31.903918 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-badb-account-create-update-m5cpb" Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.009488 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-a620-account-create-update-dxhm7"] Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.010634 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-a620-account-create-update-dxhm7" Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.017451 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.024554 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-bbcc-account-create-update-rc6qw" Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.029906 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5eb78cff-4f39-4b26-8cf4-c1c8f64730ae-operator-scripts\") pod \"nova-cell1-a620-account-create-update-dxhm7\" (UID: \"5eb78cff-4f39-4b26-8cf4-c1c8f64730ae\") " pod="openstack/nova-cell1-a620-account-create-update-dxhm7" Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.029972 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj48j\" (UniqueName: \"kubernetes.io/projected/5eb78cff-4f39-4b26-8cf4-c1c8f64730ae-kube-api-access-nj48j\") pod \"nova-cell1-a620-account-create-update-dxhm7\" (UID: \"5eb78cff-4f39-4b26-8cf4-c1c8f64730ae\") " pod="openstack/nova-cell1-a620-account-create-update-dxhm7" Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.042291 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-a620-account-create-update-dxhm7"] Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.074184 4981 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.086105 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9966-account-create-update-v784d" Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.141701 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-6bhp6_aec1d5c5-b41c-4d8b-9810-04a25a18c1b1/openstack-network-exporter/0.log" Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.142322 4981 generic.go:334] "Generic (PLEG): container finished" podID="aec1d5c5-b41c-4d8b-9810-04a25a18c1b1" containerID="52127a29349e4ff336f88e8c487c475618ad44fe9d5adba506f6a7274fdd5580" exitCode=2 Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.142524 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6bhp6" event={"ID":"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1","Type":"ContainerDied","Data":"52127a29349e4ff336f88e8c487c475618ad44fe9d5adba506f6a7274fdd5580"} Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.151958 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-a620-account-create-update-4fnbx"] Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.177202 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5eb78cff-4f39-4b26-8cf4-c1c8f64730ae-operator-scripts\") pod \"nova-cell1-a620-account-create-update-dxhm7\" (UID: \"5eb78cff-4f39-4b26-8cf4-c1c8f64730ae\") " pod="openstack/nova-cell1-a620-account-create-update-dxhm7" Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.177276 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj48j\" (UniqueName: \"kubernetes.io/projected/5eb78cff-4f39-4b26-8cf4-c1c8f64730ae-kube-api-access-nj48j\") pod \"nova-cell1-a620-account-create-update-dxhm7\" (UID: 
\"5eb78cff-4f39-4b26-8cf4-c1c8f64730ae\") " pod="openstack/nova-cell1-a620-account-create-update-dxhm7" Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.186710 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5eb78cff-4f39-4b26-8cf4-c1c8f64730ae-operator-scripts\") pod \"nova-cell1-a620-account-create-update-dxhm7\" (UID: \"5eb78cff-4f39-4b26-8cf4-c1c8f64730ae\") " pod="openstack/nova-cell1-a620-account-create-update-dxhm7" Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.200660 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d923459f-90f4-4399-80a0-4e22daa1eadf/ovn-northd/0.log" Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.200725 4981 generic.go:334] "Generic (PLEG): container finished" podID="d923459f-90f4-4399-80a0-4e22daa1eadf" containerID="3b617bffdd5cc1d450fde9acb69cf5146fe9369c179986b8ac76e6fe9affd265" exitCode=2 Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.200747 4981 generic.go:334] "Generic (PLEG): container finished" podID="d923459f-90f4-4399-80a0-4e22daa1eadf" containerID="63d0d07ec18342868dff13620687889bed168fed03e7ed3e8bab9795de7f6b30" exitCode=143 Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.200775 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d923459f-90f4-4399-80a0-4e22daa1eadf","Type":"ContainerDied","Data":"3b617bffdd5cc1d450fde9acb69cf5146fe9369c179986b8ac76e6fe9affd265"} Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.200807 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d923459f-90f4-4399-80a0-4e22daa1eadf","Type":"ContainerDied","Data":"63d0d07ec18342868dff13620687889bed168fed03e7ed3e8bab9795de7f6b30"} Feb 27 19:20:32 crc kubenswrapper[4981]: E0227 19:20:32.209409 4981 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap 
"rabbitmq-config-data" not found Feb 27 19:20:32 crc kubenswrapper[4981]: E0227 19:20:32.209637 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-config-data podName:991e04a2-e14a-4987-a7d8-b7f5db5cb8e3 nodeName:}" failed. No retries permitted until 2026-02-27 19:20:32.709599059 +0000 UTC m=+2132.188380219 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-config-data") pod "rabbitmq-server-0" (UID: "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3") : configmap "rabbitmq-config-data" not found Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.232818 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj48j\" (UniqueName: \"kubernetes.io/projected/5eb78cff-4f39-4b26-8cf4-c1c8f64730ae-kube-api-access-nj48j\") pod \"nova-cell1-a620-account-create-update-dxhm7\" (UID: \"5eb78cff-4f39-4b26-8cf4-c1c8f64730ae\") " pod="openstack/nova-cell1-a620-account-create-update-dxhm7" Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.233202 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-9rclp"] Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.249678 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-9rclp"] Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.346197 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-a620-account-create-update-4fnbx"] Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.397109 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-7h875"] Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.406628 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-a620-account-create-update-dxhm7" Feb 27 19:20:32 crc kubenswrapper[4981]: E0227 19:20:32.418255 4981 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-n5d2t" message=< Feb 27 19:20:32 crc kubenswrapper[4981]: Exiting ovn-controller (1) [ OK ] Feb 27 19:20:32 crc kubenswrapper[4981]: > Feb 27 19:20:32 crc kubenswrapper[4981]: E0227 19:20:32.418293 4981 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-n5d2t" podUID="214d65cb-9030-4093-853c-c1485fc1a30a" containerName="ovn-controller" containerID="cri-o://1d7e957ec0b2bda077dee170917b2cb8f7028331f4696bcedbf1e0135c091783" Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.418326 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-n5d2t" podUID="214d65cb-9030-4093-853c-c1485fc1a30a" containerName="ovn-controller" containerID="cri-o://1d7e957ec0b2bda077dee170917b2cb8f7028331f4696bcedbf1e0135c091783" gracePeriod=30 Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.422563 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-7h875"] Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.447266 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-421d-account-create-update-bbmzr"] Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.448546 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-421d-account-create-update-bbmzr" Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.453969 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.495288 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99264d6c-f9e9-4b89-882f-f9024381b3e4-operator-scripts\") pod \"barbican-421d-account-create-update-bbmzr\" (UID: \"99264d6c-f9e9-4b89-882f-f9024381b3e4\") " pod="openstack/barbican-421d-account-create-update-bbmzr" Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.495336 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-421d-account-create-update-bbmzr"] Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.495394 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsldz\" (UniqueName: \"kubernetes.io/projected/99264d6c-f9e9-4b89-882f-f9024381b3e4-kube-api-access-qsldz\") pod \"barbican-421d-account-create-update-bbmzr\" (UID: \"99264d6c-f9e9-4b89-882f-f9024381b3e4\") " pod="openstack/barbican-421d-account-create-update-bbmzr" Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.513662 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-421d-account-create-update-pkhb5"] Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.525560 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-421d-account-create-update-pkhb5"] Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.555498 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-8k42j"] Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.568078 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-8k42j"] Feb 27 19:20:32 
crc kubenswrapper[4981]: I0227 19:20:32.584310 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-tgsfm"] Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.601616 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsldz\" (UniqueName: \"kubernetes.io/projected/99264d6c-f9e9-4b89-882f-f9024381b3e4-kube-api-access-qsldz\") pod \"barbican-421d-account-create-update-bbmzr\" (UID: \"99264d6c-f9e9-4b89-882f-f9024381b3e4\") " pod="openstack/barbican-421d-account-create-update-bbmzr" Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.601767 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99264d6c-f9e9-4b89-882f-f9024381b3e4-operator-scripts\") pod \"barbican-421d-account-create-update-bbmzr\" (UID: \"99264d6c-f9e9-4b89-882f-f9024381b3e4\") " pod="openstack/barbican-421d-account-create-update-bbmzr" Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.602742 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99264d6c-f9e9-4b89-882f-f9024381b3e4-operator-scripts\") pod \"barbican-421d-account-create-update-bbmzr\" (UID: \"99264d6c-f9e9-4b89-882f-f9024381b3e4\") " pod="openstack/barbican-421d-account-create-update-bbmzr" Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.614116 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-tgsfm"] Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.628607 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-5pw8g"] Feb 27 19:20:32 crc kubenswrapper[4981]: E0227 19:20:32.644979 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:20:32 crc kubenswrapper[4981]: container 
&Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Feb 27 19:20:32 crc kubenswrapper[4981]: Feb 27 19:20:32 crc kubenswrapper[4981]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 27 19:20:32 crc kubenswrapper[4981]: Feb 27 19:20:32 crc kubenswrapper[4981]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 27 19:20:32 crc kubenswrapper[4981]: Feb 27 19:20:32 crc kubenswrapper[4981]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 27 19:20:32 crc kubenswrapper[4981]: Feb 27 19:20:32 crc kubenswrapper[4981]: if [ -n "glance" ]; then Feb 27 19:20:32 crc kubenswrapper[4981]: GRANT_DATABASE="glance" Feb 27 19:20:32 crc kubenswrapper[4981]: else Feb 27 19:20:32 crc kubenswrapper[4981]: GRANT_DATABASE="*" Feb 27 19:20:32 crc kubenswrapper[4981]: fi Feb 27 19:20:32 crc kubenswrapper[4981]: Feb 27 19:20:32 crc kubenswrapper[4981]: # going for maximum compatibility here: Feb 27 19:20:32 crc kubenswrapper[4981]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 27 19:20:32 crc kubenswrapper[4981]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 27 19:20:32 crc kubenswrapper[4981]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 27 19:20:32 crc kubenswrapper[4981]: # support updates Feb 27 19:20:32 crc kubenswrapper[4981]: Feb 27 19:20:32 crc kubenswrapper[4981]: $MYSQL_CMD < logger="UnhandledError" Feb 27 19:20:32 crc kubenswrapper[4981]: E0227 19:20:32.648452 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-e72a-account-create-update-fdvkk" podUID="5bf5661d-549c-4591-8f93-02bc09f63f29" Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.651391 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-5pw8g"] Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.652792 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsldz\" (UniqueName: \"kubernetes.io/projected/99264d6c-f9e9-4b89-882f-f9024381b3e4-kube-api-access-qsldz\") pod \"barbican-421d-account-create-update-bbmzr\" (UID: \"99264d6c-f9e9-4b89-882f-f9024381b3e4\") " pod="openstack/barbican-421d-account-create-update-bbmzr" Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.671748 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-k82mt"] Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.713004 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.713610 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="d57cb309-6812-4de2-a172-8d0896a7d864" containerName="openstack-network-exporter" containerID="cri-o://ca540b7d5b9796b148a47711ca823c44cdded0771435a1f7d0fe11fc82e0c7d3" gracePeriod=300 Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.755158 4981 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-cell0-cell-mapping-k82mt"] Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.795796 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e72a-account-create-update-fdvkk"] Feb 27 19:20:32 crc kubenswrapper[4981]: E0227 19:20:32.809238 4981 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 27 19:20:32 crc kubenswrapper[4981]: E0227 19:20:32.809315 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-config-data podName:991e04a2-e14a-4987-a7d8-b7f5db5cb8e3 nodeName:}" failed. No retries permitted until 2026-02-27 19:20:33.809296264 +0000 UTC m=+2133.288077424 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-config-data") pod "rabbitmq-server-0" (UID: "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3") : configmap "rabbitmq-config-data" not found Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.839992 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="d57cb309-6812-4de2-a172-8d0896a7d864" containerName="ovsdbserver-nb" containerID="cri-o://3a07e6641f50563b3e75f3c14deefbb8d5806ace0ae40d2c05789e3e1dfa6b3c" gracePeriod=300 Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.878018 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.899178 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.901284 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="e48390e6-5fc4-4c7e-983d-8338bf663e75" containerName="openstack-network-exporter" 
containerID="cri-o://dd5a437e89f1984f0479ec8286bb8061f357082ffc23b35bd6f382a0898da54c" gracePeriod=300 Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.941670 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-68687"] Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.953268 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-421d-account-create-update-bbmzr" Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.957425 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-68687"] Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.978766 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5b6bf89d9-5xrv6"] Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.979002 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5b6bf89d9-5xrv6" podUID="e691b557-a141-44b1-a2c7-4ba36af55a15" containerName="neutron-api" containerID="cri-o://05eafe4f692fe809c80310522b3ee1e9042aee13aa572f6f81438b46b0174a5c" gracePeriod=30 Feb 27 19:20:32 crc kubenswrapper[4981]: I0227 19:20:32.979392 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5b6bf89d9-5xrv6" podUID="e691b557-a141-44b1-a2c7-4ba36af55a15" containerName="neutron-httpd" containerID="cri-o://c280c1755db22cca5a1d60b0780818610aff15154fcb422c9167f6737e22b6d6" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: E0227 19:20:33.014213 4981 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 27 19:20:33 crc kubenswrapper[4981]: E0227 19:20:33.014271 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-config-data podName:f928877c-eaff-4ab4-ae3b-ba6ed721642c nodeName:}" failed. 
No retries permitted until 2026-02-27 19:20:33.514256772 +0000 UTC m=+2132.993037932 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-config-data") pod "rabbitmq-cell1-server-0" (UID: "f928877c-eaff-4ab4-ae3b-ba6ed721642c") : configmap "rabbitmq-cell1-config-data" not found Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.017038 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-6bhp6_aec1d5c5-b41c-4d8b-9810-04a25a18c1b1/openstack-network-exporter/0.log" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.017116 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-6bhp6" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.063222 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.063455 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285" containerName="cinder-scheduler" containerID="cri-o://5504fe172b1dab98409d133dc9ed246af0545979ab9f8635024b3da89221f235" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.063590 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285" containerName="probe" containerID="cri-o://ca40e5e300e579035592e5f61ba56d1c13272badf8162c9e4acf2f73e36e387f" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.131957 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-metrics-certs-tls-certs\") pod \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\" (UID: 
\"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\") " Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.135369 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-combined-ca-bundle\") pod \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\" (UID: \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\") " Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.135497 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-ovs-rundir\") pod \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\" (UID: \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\") " Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.135647 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-ovn-rundir\") pod \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\" (UID: \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\") " Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.135703 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-config\") pod \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\" (UID: \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\") " Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.135748 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqgtz\" (UniqueName: \"kubernetes.io/projected/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-kube-api-access-lqgtz\") pod \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\" (UID: \"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1\") " Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.140160 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "aec1d5c5-b41c-4d8b-9810-04a25a18c1b1" (UID: "aec1d5c5-b41c-4d8b-9810-04a25a18c1b1"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.140761 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "aec1d5c5-b41c-4d8b-9810-04a25a18c1b1" (UID: "aec1d5c5-b41c-4d8b-9810-04a25a18c1b1"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.155403 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-config" (OuterVolumeSpecName: "config") pod "aec1d5c5-b41c-4d8b-9810-04a25a18c1b1" (UID: "aec1d5c5-b41c-4d8b-9810-04a25a18c1b1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:33 crc kubenswrapper[4981]: E0227 19:20:33.167736 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1d7e957ec0b2bda077dee170917b2cb8f7028331f4696bcedbf1e0135c091783 is running failed: container process not found" containerID="1d7e957ec0b2bda077dee170917b2cb8f7028331f4696bcedbf1e0135c091783" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Feb 27 19:20:33 crc kubenswrapper[4981]: E0227 19:20:33.170634 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1d7e957ec0b2bda077dee170917b2cb8f7028331f4696bcedbf1e0135c091783 is running failed: container process not found" containerID="1d7e957ec0b2bda077dee170917b2cb8f7028331f4696bcedbf1e0135c091783" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Feb 27 19:20:33 crc kubenswrapper[4981]: E0227 19:20:33.173659 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1d7e957ec0b2bda077dee170917b2cb8f7028331f4696bcedbf1e0135c091783 is running failed: container process not found" containerID="1d7e957ec0b2bda077dee170917b2cb8f7028331f4696bcedbf1e0135c091783" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Feb 27 19:20:33 crc kubenswrapper[4981]: E0227 19:20:33.173732 4981 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1d7e957ec0b2bda077dee170917b2cb8f7028331f4696bcedbf1e0135c091783 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-n5d2t" podUID="214d65cb-9030-4093-853c-c1485fc1a30a" containerName="ovn-controller" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.195354 
4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.196384 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-kube-api-access-lqgtz" (OuterVolumeSpecName: "kube-api-access-lqgtz") pod "aec1d5c5-b41c-4d8b-9810-04a25a18c1b1" (UID: "aec1d5c5-b41c-4d8b-9810-04a25a18c1b1"). InnerVolumeSpecName "kube-api-access-lqgtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.197035 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="48fdca2c-4513-4ee6-ad1b-bf69891f5580" containerName="cinder-api" containerID="cri-o://3a24b6b7046d00eaee078203e6b423a21700f864b03fcfa22beb510090d24c3b" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.197381 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="48fdca2c-4513-4ee6-ad1b-bf69891f5580" containerName="cinder-api-log" containerID="cri-o://1cbf1ce682a3eeca1567599c6ff529e2db2d20a4965a962ce5c563a3e4dd58f1" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.241818 4981 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-ovs-rundir\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.241860 4981 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-ovn-rundir\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.241872 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-config\") on node \"crc\" 
DevicePath \"\"" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.241884 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqgtz\" (UniqueName: \"kubernetes.io/projected/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-kube-api-access-lqgtz\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.248772 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aec1d5c5-b41c-4d8b-9810-04a25a18c1b1" (UID: "aec1d5c5-b41c-4d8b-9810-04a25a18c1b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.258159 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-r2zw4"] Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.258493 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" podUID="e719b057-15c7-4204-9cbc-665f6653011f" containerName="dnsmasq-dns" containerID="cri-o://98584a232e3fef55da5240ff567aead3a2ca1595c80c0f7568768a774b5bbf94" gracePeriod=10 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.283635 4981 generic.go:334] "Generic (PLEG): container finished" podID="214d65cb-9030-4093-853c-c1485fc1a30a" containerID="1d7e957ec0b2bda077dee170917b2cb8f7028331f4696bcedbf1e0135c091783" exitCode=0 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.283646 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n5d2t" event={"ID":"214d65cb-9030-4093-853c-c1485fc1a30a","Type":"ContainerDied","Data":"1d7e957ec0b2bda077dee170917b2cb8f7028331f4696bcedbf1e0135c091783"} Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.301675 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-e72a-account-create-update-fdvkk" event={"ID":"5bf5661d-549c-4591-8f93-02bc09f63f29","Type":"ContainerStarted","Data":"12f3d04b78833a40a4f389a9efc10b665ca7c7b73ba1b44448bd3791c369fe74"} Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.314000 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-j6c5h"] Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.360972 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.545820 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-5xwl7" podUID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" containerName="ovs-vswitchd" probeResult="failure" output=< Feb 27 19:20:33 crc kubenswrapper[4981]: cat: /var/run/openvswitch/ovs-vswitchd.pid: No such file or directory Feb 27 19:20:33 crc kubenswrapper[4981]: ERROR - Failed to get pid for ovs-vswitchd, exit status: 0 Feb 27 19:20:33 crc kubenswrapper[4981]: > Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.546508 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="e48390e6-5fc4-4c7e-983d-8338bf663e75" containerName="ovsdbserver-sb" containerID="cri-o://4fed5c77f575f747ac150d5541be3f78f7462e24e36ffd8348183eb7cc147164" gracePeriod=300 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.546664 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-5xwl7" podUID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" containerName="ovs-vswitchd" containerID="cri-o://ee6fb96a4b332552632dd9dc8737353181f1aa6fa1493b16b542b6e241c02773" gracePeriod=29 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.547283 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-6f8d597b78-f58nv"] Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.547607 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6f8d597b78-f58nv" podUID="6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2" containerName="placement-log" containerID="cri-o://da47666533c186d6e31e8632cdd467e851243fc49eff7f7fcac48865f970ee5b" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.547712 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6f8d597b78-f58nv" podUID="6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2" containerName="placement-api" containerID="cri-o://dae852a53f7febec558df780ac57acd7d91cce2fba1b3b86d956c36653347faa" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.560075 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d57cb309-6812-4de2-a172-8d0896a7d864/ovsdbserver-nb/0.log" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.561034 4981 generic.go:334] "Generic (PLEG): container finished" podID="d57cb309-6812-4de2-a172-8d0896a7d864" containerID="3a07e6641f50563b3e75f3c14deefbb8d5806ace0ae40d2c05789e3e1dfa6b3c" exitCode=143 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.561082 4981 generic.go:334] "Generic (PLEG): container finished" podID="d57cb309-6812-4de2-a172-8d0896a7d864" containerID="ca540b7d5b9796b148a47711ca823c44cdded0771435a1f7d0fe11fc82e0c7d3" exitCode=2 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.561177 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d57cb309-6812-4de2-a172-8d0896a7d864","Type":"ContainerDied","Data":"3a07e6641f50563b3e75f3c14deefbb8d5806ace0ae40d2c05789e3e1dfa6b3c"} Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.561212 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"d57cb309-6812-4de2-a172-8d0896a7d864","Type":"ContainerDied","Data":"ca540b7d5b9796b148a47711ca823c44cdded0771435a1f7d0fe11fc82e0c7d3"} Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.564715 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d923459f-90f4-4399-80a0-4e22daa1eadf/ovn-northd/0.log" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.564812 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.572322 4981 generic.go:334] "Generic (PLEG): container finished" podID="e48390e6-5fc4-4c7e-983d-8338bf663e75" containerID="dd5a437e89f1984f0479ec8286bb8061f357082ffc23b35bd6f382a0898da54c" exitCode=2 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.572649 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e48390e6-5fc4-4c7e-983d-8338bf663e75","Type":"ContainerDied","Data":"dd5a437e89f1984f0479ec8286bb8061f357082ffc23b35bd6f382a0898da54c"} Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.600106 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-j6c5h"] Feb 27 19:20:33 crc kubenswrapper[4981]: E0227 19:20:33.601592 4981 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 27 19:20:33 crc kubenswrapper[4981]: E0227 19:20:33.601678 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-config-data podName:f928877c-eaff-4ab4-ae3b-ba6ed721642c nodeName:}" failed. No retries permitted until 2026-02-27 19:20:34.601656899 +0000 UTC m=+2134.080438059 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-config-data") pod "rabbitmq-cell1-server-0" (UID: "f928877c-eaff-4ab4-ae3b-ba6ed721642c") : configmap "rabbitmq-cell1-config-data" not found Feb 27 19:20:33 crc kubenswrapper[4981]: E0227 19:20:33.602987 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:20:33 crc kubenswrapper[4981]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Feb 27 19:20:33 crc kubenswrapper[4981]: Feb 27 19:20:33 crc kubenswrapper[4981]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 27 19:20:33 crc kubenswrapper[4981]: Feb 27 19:20:33 crc kubenswrapper[4981]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 27 19:20:33 crc kubenswrapper[4981]: Feb 27 19:20:33 crc kubenswrapper[4981]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 27 19:20:33 crc kubenswrapper[4981]: Feb 27 19:20:33 crc kubenswrapper[4981]: if [ -n "glance" ]; then Feb 27 19:20:33 crc kubenswrapper[4981]: GRANT_DATABASE="glance" Feb 27 19:20:33 crc kubenswrapper[4981]: else Feb 27 19:20:33 crc kubenswrapper[4981]: GRANT_DATABASE="*" Feb 27 19:20:33 crc kubenswrapper[4981]: fi Feb 27 19:20:33 crc kubenswrapper[4981]: Feb 27 19:20:33 crc kubenswrapper[4981]: # going for maximum compatibility here: Feb 27 19:20:33 crc kubenswrapper[4981]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 27 19:20:33 crc kubenswrapper[4981]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 27 19:20:33 crc kubenswrapper[4981]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 27 19:20:33 crc kubenswrapper[4981]: # support updates Feb 27 19:20:33 crc kubenswrapper[4981]: Feb 27 19:20:33 crc kubenswrapper[4981]: $MYSQL_CMD < logger="UnhandledError" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.603435 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-6bhp6_aec1d5c5-b41c-4d8b-9810-04a25a18c1b1/openstack-network-exporter/0.log" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.606222 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6bhp6" event={"ID":"aec1d5c5-b41c-4d8b-9810-04a25a18c1b1","Type":"ContainerDied","Data":"96d32469e5b65e47e87c0bdb3399131cc17cdd50b6cf2ac97e29d611fd8c9f34"} Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.606352 4981 scope.go:117] "RemoveContainer" containerID="52127a29349e4ff336f88e8c487c475618ad44fe9d5adba506f6a7274fdd5580" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.606710 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-6bhp6" Feb 27 19:20:33 crc kubenswrapper[4981]: E0227 19:20:33.607847 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-e72a-account-create-update-fdvkk" podUID="5bf5661d-549c-4591-8f93-02bc09f63f29" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.623498 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "aec1d5c5-b41c-4d8b-9810-04a25a18c1b1" (UID: "aec1d5c5-b41c-4d8b-9810-04a25a18c1b1"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.638536 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.639031 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="container-server" containerID="cri-o://e0eca54f11d429374a0eee69171647db11c1192aa00c288bd9e67f3a6f0c0246" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.639272 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="account-reaper" containerID="cri-o://77798546322cfdb767abb826f6d72d37c7c97fa182b47831196724af9d277123" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.639331 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="account-auditor" containerID="cri-o://93c87ecb8d8bad33d71e9078051a7748cc757e16bcf80e48a23944e5c1b69077" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.639365 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="account-replicator" containerID="cri-o://340a6d7be188f87cef0feaea5f958cc9043c49411edd955b9683aab0230bb9ce" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.639406 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="account-server" containerID="cri-o://24a7799c5cd63e35072f81b37d3932a76fad3192143aeadfc8474ce31dd7dd07" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: 
I0227 19:20:33.639439 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="swift-recon-cron" containerID="cri-o://f0a4445a2b6fa3cf8145c61803d537465f991247ac86d8c79a5cbc0036d344fa" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.639472 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="rsync" containerID="cri-o://ee08f1be3428c964e3a5c4747f6aa00160451c72e3665c691697f802f5a0bff8" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.639501 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="object-expirer" containerID="cri-o://6429fdd1fd1cd3788a688757b026c7af8c055f3fb7254d239ca6600f69c3448f" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.639532 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="object-updater" containerID="cri-o://7dfea33b75db73391310211c5e0efd16be4a0053864fa6f4abfd9bc77f7118f0" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.639562 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="object-auditor" containerID="cri-o://80c7a986c413669964ba2fa274f8997a3315fbfd2c8ff1d23dbd74c88b68e595" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.639594 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="object-replicator" 
containerID="cri-o://bd23d8482fb237875074c0a92ce77c62ec21a9f35c2014202018bbdef7e20697" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.639622 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="object-server" containerID="cri-o://65f6f3c00e9667ac2dc2eaf62c9691a794f16c6916c044f6252dfe67b11c9cec" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.639650 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="container-updater" containerID="cri-o://38569ba465d1fbcc944576c382365600b3972a77a5d42e3a33726b72c23be51a" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.639681 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="container-auditor" containerID="cri-o://bd2133012f7ec8d5b23febc4eae98775150d6779cece3953959dd0ebaafac076" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.639721 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="container-replicator" containerID="cri-o://9a1a2e131f5761d079c69185c95e394bd577eda00ea0354161ac5ab992f9e3d0" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.687110 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="094a0674-7bf9-4e18-9e70-8efed0ae3ac2" path="/var/lib/kubelet/pods/094a0674-7bf9-4e18-9e70-8efed0ae3ac2/volumes" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.687986 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25ef4760-0e11-422c-b084-afe3d47fbdac" 
path="/var/lib/kubelet/pods/25ef4760-0e11-422c-b084-afe3d47fbdac/volumes" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.691305 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="433a9f91-dd8c-4e01-9133-fe5e143bc696" path="/var/lib/kubelet/pods/433a9f91-dd8c-4e01-9133-fe5e143bc696/volumes" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.692183 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81024b85-8686-478d-b17e-7c599561675b" path="/var/lib/kubelet/pods/81024b85-8686-478d-b17e-7c599561675b/volumes" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.697420 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c06b80b-18d6-4fef-a1ce-2d513e9b58e6" path="/var/lib/kubelet/pods/8c06b80b-18d6-4fef-a1ce-2d513e9b58e6/volumes" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.698426 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c44ff793-41da-4b74-b057-f4b3596eeb9d" path="/var/lib/kubelet/pods/c44ff793-41da-4b74-b057-f4b3596eeb9d/volumes" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.699043 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4986e8e-fefc-4491-ba3f-9a85cf49472b" path="/var/lib/kubelet/pods/c4986e8e-fefc-4491-ba3f-9a85cf49472b/volumes" Feb 27 19:20:33 crc kubenswrapper[4981]: E0227 19:20:33.701517 4981 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Feb 27 19:20:33 crc kubenswrapper[4981]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Feb 27 19:20:33 crc kubenswrapper[4981]: + source /usr/local/bin/container-scripts/functions Feb 27 19:20:33 crc kubenswrapper[4981]: ++ OVNBridge=br-int Feb 27 19:20:33 crc kubenswrapper[4981]: ++ OVNRemote=tcp:localhost:6642 Feb 27 19:20:33 crc kubenswrapper[4981]: ++ OVNEncapType=geneve Feb 27 19:20:33 crc kubenswrapper[4981]: ++ 
OVNAvailabilityZones= Feb 27 19:20:33 crc kubenswrapper[4981]: ++ EnableChassisAsGateway=true Feb 27 19:20:33 crc kubenswrapper[4981]: ++ PhysicalNetworks= Feb 27 19:20:33 crc kubenswrapper[4981]: ++ OVNHostName= Feb 27 19:20:33 crc kubenswrapper[4981]: ++ DB_FILE=/etc/openvswitch/conf.db Feb 27 19:20:33 crc kubenswrapper[4981]: ++ ovs_dir=/var/lib/openvswitch Feb 27 19:20:33 crc kubenswrapper[4981]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Feb 27 19:20:33 crc kubenswrapper[4981]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Feb 27 19:20:33 crc kubenswrapper[4981]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 27 19:20:33 crc kubenswrapper[4981]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 27 19:20:33 crc kubenswrapper[4981]: + sleep 0.5 Feb 27 19:20:33 crc kubenswrapper[4981]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 27 19:20:33 crc kubenswrapper[4981]: + sleep 0.5 Feb 27 19:20:33 crc kubenswrapper[4981]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 27 19:20:33 crc kubenswrapper[4981]: + cleanup_ovsdb_server_semaphore Feb 27 19:20:33 crc kubenswrapper[4981]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 27 19:20:33 crc kubenswrapper[4981]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Feb 27 19:20:33 crc kubenswrapper[4981]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-5xwl7" message=< Feb 27 19:20:33 crc kubenswrapper[4981]: Exiting ovsdb-server (5) [ OK ] Feb 27 19:20:33 crc kubenswrapper[4981]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Feb 27 19:20:33 crc kubenswrapper[4981]: + source /usr/local/bin/container-scripts/functions Feb 27 19:20:33 crc kubenswrapper[4981]: ++ OVNBridge=br-int Feb 27 19:20:33 crc kubenswrapper[4981]: ++ OVNRemote=tcp:localhost:6642 Feb 27 19:20:33 crc kubenswrapper[4981]: ++ OVNEncapType=geneve Feb 27 19:20:33 crc kubenswrapper[4981]: ++ OVNAvailabilityZones= Feb 27 19:20:33 crc kubenswrapper[4981]: ++ EnableChassisAsGateway=true Feb 27 19:20:33 crc kubenswrapper[4981]: ++ PhysicalNetworks= Feb 27 19:20:33 crc kubenswrapper[4981]: ++ OVNHostName= Feb 27 19:20:33 crc kubenswrapper[4981]: ++ DB_FILE=/etc/openvswitch/conf.db Feb 27 19:20:33 crc kubenswrapper[4981]: ++ ovs_dir=/var/lib/openvswitch Feb 27 19:20:33 crc kubenswrapper[4981]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Feb 27 19:20:33 crc kubenswrapper[4981]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Feb 27 19:20:33 crc kubenswrapper[4981]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 27 19:20:33 crc kubenswrapper[4981]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 27 19:20:33 crc kubenswrapper[4981]: + sleep 0.5 Feb 27 19:20:33 crc kubenswrapper[4981]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 27 19:20:33 crc kubenswrapper[4981]: + sleep 0.5 Feb 27 19:20:33 crc kubenswrapper[4981]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 27 19:20:33 crc kubenswrapper[4981]: + cleanup_ovsdb_server_semaphore Feb 27 19:20:33 crc kubenswrapper[4981]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 27 19:20:33 crc kubenswrapper[4981]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Feb 27 19:20:33 crc kubenswrapper[4981]: > Feb 27 19:20:33 crc kubenswrapper[4981]: E0227 19:20:33.702645 4981 kuberuntime_container.go:691] "PreStop hook failed" err=< Feb 27 19:20:33 crc kubenswrapper[4981]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Feb 27 19:20:33 crc kubenswrapper[4981]: + source /usr/local/bin/container-scripts/functions Feb 27 19:20:33 crc kubenswrapper[4981]: ++ OVNBridge=br-int Feb 27 19:20:33 crc kubenswrapper[4981]: ++ OVNRemote=tcp:localhost:6642 Feb 27 19:20:33 crc kubenswrapper[4981]: ++ OVNEncapType=geneve Feb 27 19:20:33 crc kubenswrapper[4981]: ++ OVNAvailabilityZones= Feb 27 19:20:33 crc kubenswrapper[4981]: ++ EnableChassisAsGateway=true Feb 27 19:20:33 crc kubenswrapper[4981]: ++ PhysicalNetworks= Feb 27 19:20:33 crc kubenswrapper[4981]: ++ OVNHostName= Feb 27 19:20:33 crc kubenswrapper[4981]: ++ DB_FILE=/etc/openvswitch/conf.db Feb 27 19:20:33 crc kubenswrapper[4981]: ++ ovs_dir=/var/lib/openvswitch Feb 27 19:20:33 crc kubenswrapper[4981]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Feb 27 19:20:33 crc kubenswrapper[4981]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Feb 27 19:20:33 crc kubenswrapper[4981]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 27 19:20:33 crc kubenswrapper[4981]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 27 19:20:33 crc kubenswrapper[4981]: + sleep 0.5 Feb 27 19:20:33 crc kubenswrapper[4981]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 27 19:20:33 crc kubenswrapper[4981]: + sleep 0.5 Feb 27 19:20:33 crc kubenswrapper[4981]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 27 19:20:33 crc kubenswrapper[4981]: + cleanup_ovsdb_server_semaphore Feb 27 19:20:33 crc kubenswrapper[4981]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 27 19:20:33 crc kubenswrapper[4981]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Feb 27 19:20:33 crc kubenswrapper[4981]: > pod="openstack/ovn-controller-ovs-5xwl7" podUID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" containerName="ovsdb-server" containerID="cri-o://2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.703227 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-5xwl7" podUID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" containerName="ovsdb-server" containerID="cri-o://2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17" gracePeriod=28 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.703835 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgxvs\" (UniqueName: \"kubernetes.io/projected/d923459f-90f4-4399-80a0-4e22daa1eadf-kube-api-access-xgxvs\") pod \"d923459f-90f4-4399-80a0-4e22daa1eadf\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.703883 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d923459f-90f4-4399-80a0-4e22daa1eadf-combined-ca-bundle\") pod \"d923459f-90f4-4399-80a0-4e22daa1eadf\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " Feb 27 
19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.703910 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d923459f-90f4-4399-80a0-4e22daa1eadf-scripts\") pod \"d923459f-90f4-4399-80a0-4e22daa1eadf\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.703969 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d923459f-90f4-4399-80a0-4e22daa1eadf-metrics-certs-tls-certs\") pod \"d923459f-90f4-4399-80a0-4e22daa1eadf\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.708260 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d923459f-90f4-4399-80a0-4e22daa1eadf-scripts" (OuterVolumeSpecName: "scripts") pod "d923459f-90f4-4399-80a0-4e22daa1eadf" (UID: "d923459f-90f4-4399-80a0-4e22daa1eadf"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.708972 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d923459f-90f4-4399-80a0-4e22daa1eadf-ovn-northd-tls-certs\") pod \"d923459f-90f4-4399-80a0-4e22daa1eadf\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.709171 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d923459f-90f4-4399-80a0-4e22daa1eadf-config\") pod \"d923459f-90f4-4399-80a0-4e22daa1eadf\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.709203 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d923459f-90f4-4399-80a0-4e22daa1eadf-ovn-rundir\") pod \"d923459f-90f4-4399-80a0-4e22daa1eadf\" (UID: \"d923459f-90f4-4399-80a0-4e22daa1eadf\") " Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.710885 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d923459f-90f4-4399-80a0-4e22daa1eadf-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "d923459f-90f4-4399-80a0-4e22daa1eadf" (UID: "d923459f-90f4-4399-80a0-4e22daa1eadf"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.711715 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d923459f-90f4-4399-80a0-4e22daa1eadf-config" (OuterVolumeSpecName: "config") pod "d923459f-90f4-4399-80a0-4e22daa1eadf" (UID: "d923459f-90f4-4399-80a0-4e22daa1eadf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.712335 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eac1eaf7-6fea-4dae-b8f3-b81615d30ee0" path="/var/lib/kubelet/pods/eac1eaf7-6fea-4dae-b8f3-b81615d30ee0/volumes" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.712771 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" podUID="e719b057-15c7-4204-9cbc-665f6653011f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.218:5353: connect: connection refused" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.720816 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d923459f-90f4-4399-80a0-4e22daa1eadf-kube-api-access-xgxvs" (OuterVolumeSpecName: "kube-api-access-xgxvs") pod "d923459f-90f4-4399-80a0-4e22daa1eadf" (UID: "d923459f-90f4-4399-80a0-4e22daa1eadf"). InnerVolumeSpecName "kube-api-access-xgxvs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.721555 4981 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.726477 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d923459f-90f4-4399-80a0-4e22daa1eadf-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.726547 4981 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d923459f-90f4-4399-80a0-4e22daa1eadf-ovn-rundir\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.726887 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgxvs\" (UniqueName: \"kubernetes.io/projected/d923459f-90f4-4399-80a0-4e22daa1eadf-kube-api-access-xgxvs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.726962 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d923459f-90f4-4399-80a0-4e22daa1eadf-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.721803 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c" path="/var/lib/kubelet/pods/f7bfe5b3-5f46-4247-ba87-d3ec8fb6b44c/volumes" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.743803 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcfd5f62-e6b9-4a63-8030-df81c9d7b580" path="/var/lib/kubelet/pods/fcfd5f62-e6b9-4a63-8030-df81c9d7b580/volumes" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.767309 4981 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d923459f-90f4-4399-80a0-4e22daa1eadf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d923459f-90f4-4399-80a0-4e22daa1eadf" (UID: "d923459f-90f4-4399-80a0-4e22daa1eadf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.829222 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d923459f-90f4-4399-80a0-4e22daa1eadf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:33 crc kubenswrapper[4981]: E0227 19:20:33.829306 4981 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 27 19:20:33 crc kubenswrapper[4981]: E0227 19:20:33.829357 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-config-data podName:991e04a2-e14a-4987-a7d8-b7f5db5cb8e3 nodeName:}" failed. No retries permitted until 2026-02-27 19:20:35.829341016 +0000 UTC m=+2135.308122176 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-config-data") pod "rabbitmq-server-0" (UID: "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3") : configmap "rabbitmq-config-data" not found Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.942589 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.943037 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-cb28-account-create-update-wm6rr"] Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.943071 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-cb28-account-create-update-wm6rr"] Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.943091 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-9ea4-account-create-update-ggv5q"] Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.942834 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d923459f-90f4-4399-80a0-4e22daa1eadf-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "d923459f-90f4-4399-80a0-4e22daa1eadf" (UID: "d923459f-90f4-4399-80a0-4e22daa1eadf"). InnerVolumeSpecName "ovn-northd-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.943106 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-9ea4-account-create-update-ggv5q"] Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.943119 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.943137 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.943152 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.943246 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-9966-account-create-update-v784d"] Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.943508 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0aa05f73-e7d2-440b-ab1f-780f23c26272" containerName="glance-log" containerID="cri-o://2affc55a229a8b585a8ade5bf43b2c239ea9c89cb121110f24bf358bb120da2a" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.943592 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.943734 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0aa05f73-e7d2-440b-ab1f-780f23c26272" containerName="glance-httpd" containerID="cri-o://3795787eaffa6416278d5f620720e96bed3cebe839428ad34ee4a6b1bbcfb5ed" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.944278 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0c7f2b23-f800-4970-b530-aac7387e0936" containerName="nova-metadata-log" 
containerID="cri-o://b68e0bd12b334c4ff9b7ac5e9cc423457c71f31b3839b31ec2c6e093fac9743d" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.944428 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="faa3914e-426b-4791-8199-a7630729baf0" containerName="nova-api-log" containerID="cri-o://9b7e4dbef5e7da71bff472adbb75ceb0867028f6b271bd5767677e483d453167" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.944538 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c1bafd9d-a283-406e-900b-3c5d1aae55fe" containerName="glance-log" containerID="cri-o://c56f4d215954c7c816813a5607ee1845d8bcd2e458593efdcc15c07e8b8dfdc9" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.944613 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0c7f2b23-f800-4970-b530-aac7387e0936" containerName="nova-metadata-metadata" containerID="cri-o://c360cdd61e3163e8b02b644a2169bebd548e95d9e6f4be8bc924e36168d7c4bd" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.944688 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="faa3914e-426b-4791-8199-a7630729baf0" containerName="nova-api-api" containerID="cri-o://1ab68f63ecb4d970b493500d1f84ddfa479978e09bb4f5454405ac3cff3972ba" gracePeriod=30 Feb 27 19:20:33 crc kubenswrapper[4981]: I0227 19:20:33.946570 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c1bafd9d-a283-406e-900b-3c5d1aae55fe" containerName="glance-httpd" containerID="cri-o://9252062e4f9078c805d000ada9f14dcf8cf9e94119dfd3eee20804d3a99f7de4" gracePeriod=30 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.010389 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-n5d2t" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.017453 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d923459f-90f4-4399-80a0-4e22daa1eadf-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "d923459f-90f4-4399-80a0-4e22daa1eadf" (UID: "d923459f-90f4-4399-80a0-4e22daa1eadf"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.026534 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e72a-account-create-update-fdvkk"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.033310 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/214d65cb-9030-4093-853c-c1485fc1a30a-var-run\") pod \"214d65cb-9030-4093-853c-c1485fc1a30a\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.033534 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/214d65cb-9030-4093-853c-c1485fc1a30a-combined-ca-bundle\") pod \"214d65cb-9030-4093-853c-c1485fc1a30a\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.033687 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/214d65cb-9030-4093-853c-c1485fc1a30a-ovn-controller-tls-certs\") pod \"214d65cb-9030-4093-853c-c1485fc1a30a\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.033795 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/214d65cb-9030-4093-853c-c1485fc1a30a-var-log-ovn\") pod \"214d65cb-9030-4093-853c-c1485fc1a30a\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.033868 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkh5p\" (UniqueName: \"kubernetes.io/projected/214d65cb-9030-4093-853c-c1485fc1a30a-kube-api-access-vkh5p\") pod \"214d65cb-9030-4093-853c-c1485fc1a30a\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.033933 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/214d65cb-9030-4093-853c-c1485fc1a30a-var-run-ovn\") pod \"214d65cb-9030-4093-853c-c1485fc1a30a\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.034028 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/214d65cb-9030-4093-853c-c1485fc1a30a-scripts\") pod \"214d65cb-9030-4093-853c-c1485fc1a30a\" (UID: \"214d65cb-9030-4093-853c-c1485fc1a30a\") " Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.034395 4981 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d923459f-90f4-4399-80a0-4e22daa1eadf-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.034459 4981 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d923459f-90f4-4399-80a0-4e22daa1eadf-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.038796 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/214d65cb-9030-4093-853c-c1485fc1a30a-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "214d65cb-9030-4093-853c-c1485fc1a30a" (UID: "214d65cb-9030-4093-853c-c1485fc1a30a"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.038921 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/214d65cb-9030-4093-853c-c1485fc1a30a-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "214d65cb-9030-4093-853c-c1485fc1a30a" (UID: "214d65cb-9030-4093-853c-c1485fc1a30a"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.038970 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/214d65cb-9030-4093-853c-c1485fc1a30a-var-run" (OuterVolumeSpecName: "var-run") pod "214d65cb-9030-4093-853c-c1485fc1a30a" (UID: "214d65cb-9030-4093-853c-c1485fc1a30a"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.039489 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/214d65cb-9030-4093-853c-c1485fc1a30a-scripts" (OuterVolumeSpecName: "scripts") pod "214d65cb-9030-4093-853c-c1485fc1a30a" (UID: "214d65cb-9030-4093-853c-c1485fc1a30a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.046739 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/214d65cb-9030-4093-853c-c1485fc1a30a-kube-api-access-vkh5p" (OuterVolumeSpecName: "kube-api-access-vkh5p") pod "214d65cb-9030-4093-853c-c1485fc1a30a" (UID: "214d65cb-9030-4093-853c-c1485fc1a30a"). InnerVolumeSpecName "kube-api-access-vkh5p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.065346 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-badb-account-create-update-m5cpb"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.075580 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/214d65cb-9030-4093-853c-c1485fc1a30a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "214d65cb-9030-4093-853c-c1485fc1a30a" (UID: "214d65cb-9030-4093-853c-c1485fc1a30a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.093185 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-a620-account-create-update-dxhm7"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.101486 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-hpvzf"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.110318 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-5kflz"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.122717 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-421d-account-create-update-bbmzr"] Feb 27 19:20:34 crc kubenswrapper[4981]: E0227 19:20:34.131914 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:20:34 crc kubenswrapper[4981]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: export DatabasePassword=${DatabasePassword:?"Please specify a 
DatabasePassword variable."} Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: if [ -n "" ]; then Feb 27 19:20:34 crc kubenswrapper[4981]: GRANT_DATABASE="" Feb 27 19:20:34 crc kubenswrapper[4981]: else Feb 27 19:20:34 crc kubenswrapper[4981]: GRANT_DATABASE="*" Feb 27 19:20:34 crc kubenswrapper[4981]: fi Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: # going for maximum compatibility here: Feb 27 19:20:34 crc kubenswrapper[4981]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 27 19:20:34 crc kubenswrapper[4981]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 27 19:20:34 crc kubenswrapper[4981]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 27 19:20:34 crc kubenswrapper[4981]: # support updates Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: $MYSQL_CMD < logger="UnhandledError" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.133434 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-bbcc-account-create-update-rc6qw"] Feb 27 19:20:34 crc kubenswrapper[4981]: E0227 19:20:34.133470 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-d9wv4" podUID="75dabc26-0258-4ef3-b0c8-04231f2fa5c5" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.136436 4981 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/214d65cb-9030-4093-853c-c1485fc1a30a-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.136521 4981 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/214d65cb-9030-4093-853c-c1485fc1a30a-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.136576 4981 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/214d65cb-9030-4093-853c-c1485fc1a30a-var-run\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.151121 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/214d65cb-9030-4093-853c-c1485fc1a30a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.149116 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-fd6854db9-vlzhb"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.152602 4981 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/214d65cb-9030-4093-853c-c1485fc1a30a-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.152787 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkh5p\" (UniqueName: \"kubernetes.io/projected/214d65cb-9030-4093-853c-c1485fc1a30a-kube-api-access-vkh5p\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.152975 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-fd6854db9-vlzhb" podUID="3e1537c5-44bf-4b8f-8ea4-07bf58baf21f" containerName="proxy-httpd" containerID="cri-o://952ca9fd397f06e97e6cb589cce8711001ad9b1917f1597f155cbdfe54ecd748" gracePeriod=30 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.153537 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-fd6854db9-vlzhb" 
podUID="3e1537c5-44bf-4b8f-8ea4-07bf58baf21f" containerName="proxy-server" containerID="cri-o://469a1e71362f66007e5f99a18ff696a214d1ad52159039dec19da2dbfa3d13ff" gracePeriod=30 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.168210 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d57cb309-6812-4de2-a172-8d0896a7d864/ovsdbserver-nb/0.log" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.168374 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-hpvzf"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.169551 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.178222 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-nw57q"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.193304 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-5kflz"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.206566 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-nw57q"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.214812 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.215274 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="4e27e8aa-f220-4415-8670-ca9186161dba" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://6c1a95f2a0729962517d2e152f52fd832e734caecb67ecd85b30fb4674656560" gracePeriod=30 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.226887 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-d9wv4"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.237181 4981 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.240167 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.252626 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-649cdc5f7c-t45d9"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.252925 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-649cdc5f7c-t45d9" podUID="0b5819ab-18f7-4885-a4b9-a6a3401903a1" containerName="barbican-worker-log" containerID="cri-o://e2aebd1ae7faac4c52f84543c2d526e5fb6068d8aa4b2db54d588a78b114286f" gracePeriod=30 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.253446 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-649cdc5f7c-t45d9" podUID="0b5819ab-18f7-4885-a4b9-a6a3401903a1" containerName="barbican-worker" containerID="cri-o://ca32bbbc043bf5c3d443cd95726f4e60c54e238c98114190773e4f7cf04378fe" gracePeriod=30 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.254364 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"d57cb309-6812-4de2-a172-8d0896a7d864\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.254932 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d57cb309-6812-4de2-a172-8d0896a7d864-config\") pod \"d57cb309-6812-4de2-a172-8d0896a7d864\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.256154 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/d57cb309-6812-4de2-a172-8d0896a7d864-ovsdbserver-nb-tls-certs\") pod \"d57cb309-6812-4de2-a172-8d0896a7d864\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.256396 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d57cb309-6812-4de2-a172-8d0896a7d864-scripts\") pod \"d57cb309-6812-4de2-a172-8d0896a7d864\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.256547 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7mtx\" (UniqueName: \"kubernetes.io/projected/d57cb309-6812-4de2-a172-8d0896a7d864-kube-api-access-g7mtx\") pod \"d57cb309-6812-4de2-a172-8d0896a7d864\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.256651 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d57cb309-6812-4de2-a172-8d0896a7d864-metrics-certs-tls-certs\") pod \"d57cb309-6812-4de2-a172-8d0896a7d864\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.256789 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d57cb309-6812-4de2-a172-8d0896a7d864-ovsdb-rundir\") pod \"d57cb309-6812-4de2-a172-8d0896a7d864\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.256906 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d57cb309-6812-4de2-a172-8d0896a7d864-combined-ca-bundle\") pod \"d57cb309-6812-4de2-a172-8d0896a7d864\" (UID: \"d57cb309-6812-4de2-a172-8d0896a7d864\") " Feb 27 19:20:34 
crc kubenswrapper[4981]: I0227 19:20:34.256093 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d57cb309-6812-4de2-a172-8d0896a7d864-config" (OuterVolumeSpecName: "config") pod "d57cb309-6812-4de2-a172-8d0896a7d864" (UID: "d57cb309-6812-4de2-a172-8d0896a7d864"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.262992 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d57cb309-6812-4de2-a172-8d0896a7d864-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "d57cb309-6812-4de2-a172-8d0896a7d864" (UID: "d57cb309-6812-4de2-a172-8d0896a7d864"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.263209 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d57cb309-6812-4de2-a172-8d0896a7d864-scripts" (OuterVolumeSpecName: "scripts") pod "d57cb309-6812-4de2-a172-8d0896a7d864" (UID: "d57cb309-6812-4de2-a172-8d0896a7d864"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.265718 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-76f488968b-rp6r2"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.266075 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-76f488968b-rp6r2" podUID="a912cdfa-b0ce-4ed4-909d-9d1af2a5a879" containerName="barbican-api-log" containerID="cri-o://2bfade5e02f0643fd1bf6ce8f381f0eeac25e9090ba2aefbc0b89a2a24773551" gracePeriod=30 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.266129 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-76f488968b-rp6r2" podUID="a912cdfa-b0ce-4ed4-909d-9d1af2a5a879" containerName="barbican-api" containerID="cri-o://7a49980a63c473dcc320eddd1d497f4ed0bc0d50087c74c6dd29e6e12d599a2d" gracePeriod=30 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.278503 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "d57cb309-6812-4de2-a172-8d0896a7d864" (UID: "d57cb309-6812-4de2-a172-8d0896a7d864"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.287345 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.287662 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" podUID="392b1bc3-d461-4cc5-8d63-64922c6c3d04" containerName="barbican-keystone-listener-log" containerID="cri-o://a80ed4738b33c46d258ade1f4c61824d5ccb8d85f0a684c76db99ee197e25f02" gracePeriod=30 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.287739 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" podUID="392b1bc3-d461-4cc5-8d63-64922c6c3d04" containerName="barbican-keystone-listener" containerID="cri-o://29a09efd35a13f9913a2db4c13af471be771dc66870af5193ce438f581026f26" gracePeriod=30 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.292698 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d57cb309-6812-4de2-a172-8d0896a7d864-kube-api-access-g7mtx" (OuterVolumeSpecName: "kube-api-access-g7mtx") pod "d57cb309-6812-4de2-a172-8d0896a7d864" (UID: "d57cb309-6812-4de2-a172-8d0896a7d864"). InnerVolumeSpecName "kube-api-access-g7mtx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.292872 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" containerName="rabbitmq" containerID="cri-o://57f6f61d81b4bb62c7f39a3b1be260072a8b0b4fe66cc915fa2a92ab863c30e7" gracePeriod=604800 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.314233 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.314958 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d83a972b-9d9d-407c-a714-821900bc148e" containerName="nova-scheduler-scheduler" containerID="cri-o://3152c46cc349f7837726327bf3254bd69f1fb1bbbe347a8afb428e8a80072528" gracePeriod=30 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.323300 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/214d65cb-9030-4093-853c-c1485fc1a30a-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "214d65cb-9030-4093-853c-c1485fc1a30a" (UID: "214d65cb-9030-4093-853c-c1485fc1a30a"). InnerVolumeSpecName "ovn-controller-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.343626 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.359752 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6047b4ff-4778-43fd-8d8e-c84b76ff271e-combined-ca-bundle\") pod \"6047b4ff-4778-43fd-8d8e-c84b76ff271e\" (UID: \"6047b4ff-4778-43fd-8d8e-c84b76ff271e\") " Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.360099 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6047b4ff-4778-43fd-8d8e-c84b76ff271e-openstack-config\") pod \"6047b4ff-4778-43fd-8d8e-c84b76ff271e\" (UID: \"6047b4ff-4778-43fd-8d8e-c84b76ff271e\") " Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.360260 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvhc7\" (UniqueName: \"kubernetes.io/projected/6047b4ff-4778-43fd-8d8e-c84b76ff271e-kube-api-access-fvhc7\") pod \"6047b4ff-4778-43fd-8d8e-c84b76ff271e\" (UID: \"6047b4ff-4778-43fd-8d8e-c84b76ff271e\") " Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.360461 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6047b4ff-4778-43fd-8d8e-c84b76ff271e-openstack-config-secret\") pod \"6047b4ff-4778-43fd-8d8e-c84b76ff271e\" (UID: \"6047b4ff-4778-43fd-8d8e-c84b76ff271e\") " Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.370418 4981 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/214d65cb-9030-4093-853c-c1485fc1a30a-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 
19:20:34.370536 4981 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.370628 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d57cb309-6812-4de2-a172-8d0896a7d864-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.370720 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d57cb309-6812-4de2-a172-8d0896a7d864-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.370797 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7mtx\" (UniqueName: \"kubernetes.io/projected/d57cb309-6812-4de2-a172-8d0896a7d864-kube-api-access-g7mtx\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.370865 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d57cb309-6812-4de2-a172-8d0896a7d864-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.372570 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6047b4ff-4778-43fd-8d8e-c84b76ff271e-kube-api-access-fvhc7" (OuterVolumeSpecName: "kube-api-access-fvhc7") pod "6047b4ff-4778-43fd-8d8e-c84b76ff271e" (UID: "6047b4ff-4778-43fd-8d8e-c84b76ff271e"). InnerVolumeSpecName "kube-api-access-fvhc7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.384516 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="1c22e070-8348-440e-a801-64927da21e98" containerName="galera" containerID="cri-o://156db9f0e659f08402952be0d8b3b765d9002fac585f7beceaa66f2923a4c3d1" gracePeriod=30 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.462197 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9sxrp"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.470553 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.470792 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="caff730d-9210-4de9-b0f1-997e6f5f16c3" containerName="nova-cell1-conductor-conductor" containerID="cri-o://a00132f0ac6dbee951194bcad710a6371433227c3b0775c31e258a5544d129d7" gracePeriod=30 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.481599 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvhc7\" (UniqueName: \"kubernetes.io/projected/6047b4ff-4778-43fd-8d8e-c84b76ff271e-kube-api-access-fvhc7\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.484596 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9sxrp"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.506191 4981 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.521320 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 
19:20:34.521581 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="e4ec5ec3-4a83-4c2a-adde-600a759fcec2" containerName="nova-cell0-conductor-conductor" containerID="cri-o://9c670261714a51e0bc1dd408854bbb5ff1ed9f5b62828c8c8ece1900fa737f24" gracePeriod=30 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.602020 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d57cb309-6812-4de2-a172-8d0896a7d864-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d57cb309-6812-4de2-a172-8d0896a7d864" (UID: "d57cb309-6812-4de2-a172-8d0896a7d864"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.607282 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="f928877c-eaff-4ab4-ae3b-ba6ed721642c" containerName="rabbitmq" containerID="cri-o://b17d1c158ee9a02d955c961d36f3778f1d0ce99cc8890e879aaabb3483dbe8a8" gracePeriod=604800 Feb 27 19:20:34 crc kubenswrapper[4981]: E0227 19:20:34.612482 4981 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 27 19:20:34 crc kubenswrapper[4981]: E0227 19:20:34.612584 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-config-data podName:f928877c-eaff-4ab4-ae3b-ba6ed721642c nodeName:}" failed. No retries permitted until 2026-02-27 19:20:36.612562769 +0000 UTC m=+2136.091343919 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-config-data") pod "rabbitmq-cell1-server-0" (UID: "f928877c-eaff-4ab4-ae3b-ba6ed721642c") : configmap "rabbitmq-cell1-config-data" not found Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.657543 4981 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.658287 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d57cb309-6812-4de2-a172-8d0896a7d864-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.678919 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-clm2b"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.706391 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-clm2b"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.718425 4981 generic.go:334] "Generic (PLEG): container finished" podID="e719b057-15c7-4204-9cbc-665f6653011f" containerID="98584a232e3fef55da5240ff567aead3a2ca1595c80c0f7568768a774b5bbf94" exitCode=0 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.718590 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" event={"ID":"e719b057-15c7-4204-9cbc-665f6653011f","Type":"ContainerDied","Data":"98584a232e3fef55da5240ff567aead3a2ca1595c80c0f7568768a774b5bbf94"} Feb 27 19:20:34 crc kubenswrapper[4981]: E0227 19:20:34.742122 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:20:34 crc kubenswrapper[4981]: container 
&Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: if [ -n "placement" ]; then Feb 27 19:20:34 crc kubenswrapper[4981]: GRANT_DATABASE="placement" Feb 27 19:20:34 crc kubenswrapper[4981]: else Feb 27 19:20:34 crc kubenswrapper[4981]: GRANT_DATABASE="*" Feb 27 19:20:34 crc kubenswrapper[4981]: fi Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: # going for maximum compatibility here: Feb 27 19:20:34 crc kubenswrapper[4981]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 27 19:20:34 crc kubenswrapper[4981]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 27 19:20:34 crc kubenswrapper[4981]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 27 19:20:34 crc kubenswrapper[4981]: # support updates Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: $MYSQL_CMD < logger="UnhandledError" Feb 27 19:20:34 crc kubenswrapper[4981]: E0227 19:20:34.743546 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-9966-account-create-update-v784d" podUID="71b893f8-fc1b-4dba-b63a-3c759969ae3c" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.751477 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6047b4ff-4778-43fd-8d8e-c84b76ff271e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "6047b4ff-4778-43fd-8d8e-c84b76ff271e" (UID: "6047b4ff-4778-43fd-8d8e-c84b76ff271e"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.752909 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d923459f-90f4-4399-80a0-4e22daa1eadf/ovn-northd/0.log" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.753022 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d923459f-90f4-4399-80a0-4e22daa1eadf","Type":"ContainerDied","Data":"c75e74454c171b9dd36d124c75f8cb0433369d53595579de8223945018970929"} Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.753092 4981 scope.go:117] "RemoveContainer" containerID="3b617bffdd5cc1d450fde9acb69cf5146fe9369c179986b8ac76e6fe9affd265" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.753666 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6047b4ff-4778-43fd-8d8e-c84b76ff271e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "6047b4ff-4778-43fd-8d8e-c84b76ff271e" (UID: "6047b4ff-4778-43fd-8d8e-c84b76ff271e"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.765374 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.766920 4981 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6047b4ff-4778-43fd-8d8e-c84b76ff271e-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.767154 4981 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6047b4ff-4778-43fd-8d8e-c84b76ff271e-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:34 crc kubenswrapper[4981]: E0227 19:20:34.775569 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:20:34 crc kubenswrapper[4981]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: if [ -n "nova_api" ]; then Feb 27 19:20:34 crc kubenswrapper[4981]: GRANT_DATABASE="nova_api" Feb 27 19:20:34 crc kubenswrapper[4981]: else Feb 27 19:20:34 crc kubenswrapper[4981]: GRANT_DATABASE="*" Feb 27 19:20:34 crc kubenswrapper[4981]: fi Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: # going for maximum compatibility here: Feb 27 19:20:34 crc kubenswrapper[4981]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 27 19:20:34 crc kubenswrapper[4981]: # 2. 
MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 27 19:20:34 crc kubenswrapper[4981]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 27 19:20:34 crc kubenswrapper[4981]: # support updates Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: $MYSQL_CMD < logger="UnhandledError" Feb 27 19:20:34 crc kubenswrapper[4981]: E0227 19:20:34.777041 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-badb-account-create-update-m5cpb" podUID="a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21" Feb 27 19:20:34 crc kubenswrapper[4981]: E0227 19:20:34.777430 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:20:34 crc kubenswrapper[4981]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: if [ -n "nova_cell1" ]; then Feb 27 19:20:34 crc kubenswrapper[4981]: GRANT_DATABASE="nova_cell1" Feb 27 19:20:34 crc kubenswrapper[4981]: else Feb 27 19:20:34 crc kubenswrapper[4981]: GRANT_DATABASE="*" Feb 27 19:20:34 crc kubenswrapper[4981]: fi Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: # going for maximum compatibility here: Feb 27 19:20:34 crc kubenswrapper[4981]: # 1. 
MySQL 8 no longer allows implicit create user when GRANT is used Feb 27 19:20:34 crc kubenswrapper[4981]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 27 19:20:34 crc kubenswrapper[4981]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 27 19:20:34 crc kubenswrapper[4981]: # support updates Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: $MYSQL_CMD < logger="UnhandledError" Feb 27 19:20:34 crc kubenswrapper[4981]: E0227 19:20:34.780576 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell1-db-secret\\\" not found\"" pod="openstack/nova-cell1-a620-account-create-update-dxhm7" podUID="5eb78cff-4f39-4b26-8cf4-c1c8f64730ae" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.785701 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6047b4ff-4778-43fd-8d8e-c84b76ff271e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6047b4ff-4778-43fd-8d8e-c84b76ff271e" (UID: "6047b4ff-4778-43fd-8d8e-c84b76ff271e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.805526 4981 generic.go:334] "Generic (PLEG): container finished" podID="48fdca2c-4513-4ee6-ad1b-bf69891f5580" containerID="1cbf1ce682a3eeca1567599c6ff529e2db2d20a4965a962ce5c563a3e4dd58f1" exitCode=143 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.805822 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"48fdca2c-4513-4ee6-ad1b-bf69891f5580","Type":"ContainerDied","Data":"1cbf1ce682a3eeca1567599c6ff529e2db2d20a4965a962ce5c563a3e4dd58f1"} Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.809838 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-d9wv4"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.832821 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-6bhp6"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.834160 4981 generic.go:334] "Generic (PLEG): container finished" podID="3e1537c5-44bf-4b8f-8ea4-07bf58baf21f" containerID="469a1e71362f66007e5f99a18ff696a214d1ad52159039dec19da2dbfa3d13ff" exitCode=0 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.834193 4981 generic.go:334] "Generic (PLEG): container finished" podID="3e1537c5-44bf-4b8f-8ea4-07bf58baf21f" containerID="952ca9fd397f06e97e6cb589cce8711001ad9b1917f1597f155cbdfe54ecd748" exitCode=0 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.834236 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-fd6854db9-vlzhb" event={"ID":"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f","Type":"ContainerDied","Data":"469a1e71362f66007e5f99a18ff696a214d1ad52159039dec19da2dbfa3d13ff"} Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.834303 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-fd6854db9-vlzhb" 
event={"ID":"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f","Type":"ContainerDied","Data":"952ca9fd397f06e97e6cb589cce8711001ad9b1917f1597f155cbdfe54ecd748"} Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.840187 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-6bhp6"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.848766 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d57cb309-6812-4de2-a172-8d0896a7d864-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "d57cb309-6812-4de2-a172-8d0896a7d864" (UID: "d57cb309-6812-4de2-a172-8d0896a7d864"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.852335 4981 generic.go:334] "Generic (PLEG): container finished" podID="a912cdfa-b0ce-4ed4-909d-9d1af2a5a879" containerID="2bfade5e02f0643fd1bf6ce8f381f0eeac25e9090ba2aefbc0b89a2a24773551" exitCode=143 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.852471 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76f488968b-rp6r2" event={"ID":"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879","Type":"ContainerDied","Data":"2bfade5e02f0643fd1bf6ce8f381f0eeac25e9090ba2aefbc0b89a2a24773551"} Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.862091 4981 generic.go:334] "Generic (PLEG): container finished" podID="8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285" containerID="ca40e5e300e579035592e5f61ba56d1c13272badf8162c9e4acf2f73e36e387f" exitCode=0 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.862157 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285","Type":"ContainerDied","Data":"ca40e5e300e579035592e5f61ba56d1c13272badf8162c9e4acf2f73e36e387f"} Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.864329 4981 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/placement-9966-account-create-update-v784d"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.871225 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-a620-account-create-update-dxhm7"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.871532 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e48390e6-5fc4-4c7e-983d-8338bf663e75/ovsdbserver-sb/0.log" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.871577 4981 generic.go:334] "Generic (PLEG): container finished" podID="e48390e6-5fc4-4c7e-983d-8338bf663e75" containerID="4fed5c77f575f747ac150d5541be3f78f7462e24e36ffd8348183eb7cc147164" exitCode=143 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.871697 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e48390e6-5fc4-4c7e-983d-8338bf663e75","Type":"ContainerDied","Data":"4fed5c77f575f747ac150d5541be3f78f7462e24e36ffd8348183eb7cc147164"} Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.871731 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e48390e6-5fc4-4c7e-983d-8338bf663e75","Type":"ContainerDied","Data":"69a3f607a6e0f7579865d967b1d4a552c510c27fe2811ab7df2ad0839c99d9d8"} Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.871748 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69a3f607a6e0f7579865d967b1d4a552c510c27fe2811ab7df2ad0839c99d9d8" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.872085 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d57cb309-6812-4de2-a172-8d0896a7d864-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.872114 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/6047b4ff-4778-43fd-8d8e-c84b76ff271e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.875733 4981 generic.go:334] "Generic (PLEG): container finished" podID="6047b4ff-4778-43fd-8d8e-c84b76ff271e" containerID="ad16f6eebbf2beec846b93606e3bf6e09e066977ec08c59d3b1d01db8b58e6a8" exitCode=137 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.875868 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.881743 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-badb-account-create-update-m5cpb"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.888864 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-bbcc-account-create-update-rc6qw"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.898340 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-421d-account-create-update-bbmzr"] Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.904269 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d57cb309-6812-4de2-a172-8d0896a7d864-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "d57cb309-6812-4de2-a172-8d0896a7d864" (UID: "d57cb309-6812-4de2-a172-8d0896a7d864"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.915451 4981 generic.go:334] "Generic (PLEG): container finished" podID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" containerID="2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17" exitCode=0 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.915543 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5xwl7" event={"ID":"a1d85462-e999-48fc-8c36-ce8bbe60ed3d","Type":"ContainerDied","Data":"2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17"} Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.919034 4981 generic.go:334] "Generic (PLEG): container finished" podID="0c7f2b23-f800-4970-b530-aac7387e0936" containerID="b68e0bd12b334c4ff9b7ac5e9cc423457c71f31b3839b31ec2c6e093fac9743d" exitCode=143 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.919096 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c7f2b23-f800-4970-b530-aac7387e0936","Type":"ContainerDied","Data":"b68e0bd12b334c4ff9b7ac5e9cc423457c71f31b3839b31ec2c6e093fac9743d"} Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.920456 4981 generic.go:334] "Generic (PLEG): container finished" podID="0b5819ab-18f7-4885-a4b9-a6a3401903a1" containerID="e2aebd1ae7faac4c52f84543c2d526e5fb6068d8aa4b2db54d588a78b114286f" exitCode=143 Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.929295 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-649cdc5f7c-t45d9" event={"ID":"0b5819ab-18f7-4885-a4b9-a6a3401903a1","Type":"ContainerDied","Data":"e2aebd1ae7faac4c52f84543c2d526e5fb6068d8aa4b2db54d588a78b114286f"} Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.950866 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d9wv4" 
event={"ID":"75dabc26-0258-4ef3-b0c8-04231f2fa5c5","Type":"ContainerStarted","Data":"8fb5b3df0d7ace588fd7f6ddfffcfde8e231cf6d15bae18ff32d15a421435dfb"} Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.950977 4981 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-d9wv4" secret="" err="secret \"galera-openstack-cell1-dockercfg-hf2br\" not found" Feb 27 19:20:34 crc kubenswrapper[4981]: E0227 19:20:34.952789 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:20:34 crc kubenswrapper[4981]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: if [ -n "" ]; then Feb 27 19:20:34 crc kubenswrapper[4981]: GRANT_DATABASE="" Feb 27 19:20:34 crc kubenswrapper[4981]: else Feb 27 19:20:34 crc kubenswrapper[4981]: GRANT_DATABASE="*" Feb 27 19:20:34 crc kubenswrapper[4981]: fi Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: # going for maximum compatibility here: Feb 27 19:20:34 crc kubenswrapper[4981]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 27 19:20:34 crc kubenswrapper[4981]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 27 19:20:34 crc kubenswrapper[4981]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 27 19:20:34 crc kubenswrapper[4981]: # support updates Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: $MYSQL_CMD < logger="UnhandledError" Feb 27 19:20:34 crc kubenswrapper[4981]: E0227 19:20:34.953765 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:20:34 crc kubenswrapper[4981]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: if [ -n "barbican" ]; then Feb 27 19:20:34 crc kubenswrapper[4981]: GRANT_DATABASE="barbican" Feb 27 19:20:34 crc kubenswrapper[4981]: else Feb 27 19:20:34 crc kubenswrapper[4981]: GRANT_DATABASE="*" Feb 27 19:20:34 crc kubenswrapper[4981]: fi Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: # going for maximum compatibility here: Feb 27 19:20:34 crc kubenswrapper[4981]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 27 19:20:34 crc kubenswrapper[4981]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 27 19:20:34 crc kubenswrapper[4981]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 27 19:20:34 crc kubenswrapper[4981]: # support updates Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: $MYSQL_CMD < logger="UnhandledError" Feb 27 19:20:34 crc kubenswrapper[4981]: E0227 19:20:34.954012 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:20:34 crc kubenswrapper[4981]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: if [ -n "nova_cell0" ]; then Feb 27 19:20:34 crc kubenswrapper[4981]: GRANT_DATABASE="nova_cell0" Feb 27 19:20:34 crc kubenswrapper[4981]: else Feb 27 19:20:34 crc kubenswrapper[4981]: GRANT_DATABASE="*" Feb 27 19:20:34 crc kubenswrapper[4981]: fi Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: # going for maximum compatibility here: Feb 27 19:20:34 crc kubenswrapper[4981]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 27 19:20:34 crc kubenswrapper[4981]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 27 19:20:34 crc kubenswrapper[4981]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 27 19:20:34 crc kubenswrapper[4981]: # support updates Feb 27 19:20:34 crc kubenswrapper[4981]: Feb 27 19:20:34 crc kubenswrapper[4981]: $MYSQL_CMD < logger="UnhandledError" Feb 27 19:20:34 crc kubenswrapper[4981]: E0227 19:20:34.954111 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-d9wv4" podUID="75dabc26-0258-4ef3-b0c8-04231f2fa5c5" Feb 27 19:20:34 crc kubenswrapper[4981]: E0227 19:20:34.955305 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-bbcc-account-create-update-rc6qw" podUID="efe6b6e0-5d7c-4207-b5fc-44f510e301b7" Feb 27 19:20:34 crc kubenswrapper[4981]: E0227 19:20:34.955364 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-421d-account-create-update-bbmzr" podUID="99264d6c-f9e9-4b89-882f-f9024381b3e4" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.956552 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e48390e6-5fc4-4c7e-983d-8338bf663e75/ovsdbserver-sb/0.log" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.956615 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.975377 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" Feb 27 19:20:34 crc kubenswrapper[4981]: E0227 19:20:34.975589 4981 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Feb 27 19:20:34 crc kubenswrapper[4981]: E0227 19:20:34.975638 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/75dabc26-0258-4ef3-b0c8-04231f2fa5c5-operator-scripts podName:75dabc26-0258-4ef3-b0c8-04231f2fa5c5 nodeName:}" failed. No retries permitted until 2026-02-27 19:20:35.475625553 +0000 UTC m=+2134.954406703 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/75dabc26-0258-4ef3-b0c8-04231f2fa5c5-operator-scripts") pod "root-account-create-update-d9wv4" (UID: "75dabc26-0258-4ef3-b0c8-04231f2fa5c5") : configmap "openstack-cell1-scripts" not found Feb 27 19:20:34 crc kubenswrapper[4981]: I0227 19:20:34.975804 4981 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d57cb309-6812-4de2-a172-8d0896a7d864-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003013 4981 generic.go:334] "Generic (PLEG): container finished" podID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerID="77798546322cfdb767abb826f6d72d37c7c97fa182b47831196724af9d277123" exitCode=0 Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003047 4981 generic.go:334] "Generic (PLEG): container finished" podID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerID="93c87ecb8d8bad33d71e9078051a7748cc757e16bcf80e48a23944e5c1b69077" exitCode=0 Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003071 4981 generic.go:334] "Generic (PLEG): container finished" podID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerID="340a6d7be188f87cef0feaea5f958cc9043c49411edd955b9683aab0230bb9ce" exitCode=0 Feb 27 19:20:35 
crc kubenswrapper[4981]: I0227 19:20:35.003080 4981 generic.go:334] "Generic (PLEG): container finished" podID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerID="24a7799c5cd63e35072f81b37d3932a76fad3192143aeadfc8474ce31dd7dd07" exitCode=0 Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003086 4981 generic.go:334] "Generic (PLEG): container finished" podID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerID="ee08f1be3428c964e3a5c4747f6aa00160451c72e3665c691697f802f5a0bff8" exitCode=0 Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003093 4981 generic.go:334] "Generic (PLEG): container finished" podID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerID="6429fdd1fd1cd3788a688757b026c7af8c055f3fb7254d239ca6600f69c3448f" exitCode=0 Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003100 4981 generic.go:334] "Generic (PLEG): container finished" podID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerID="7dfea33b75db73391310211c5e0efd16be4a0053864fa6f4abfd9bc77f7118f0" exitCode=0 Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003105 4981 generic.go:334] "Generic (PLEG): container finished" podID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerID="80c7a986c413669964ba2fa274f8997a3315fbfd2c8ff1d23dbd74c88b68e595" exitCode=0 Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003111 4981 generic.go:334] "Generic (PLEG): container finished" podID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerID="bd23d8482fb237875074c0a92ce77c62ec21a9f35c2014202018bbdef7e20697" exitCode=0 Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003118 4981 generic.go:334] "Generic (PLEG): container finished" podID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerID="65f6f3c00e9667ac2dc2eaf62c9691a794f16c6916c044f6252dfe67b11c9cec" exitCode=0 Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003125 4981 generic.go:334] "Generic (PLEG): container finished" podID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" 
containerID="38569ba465d1fbcc944576c382365600b3972a77a5d42e3a33726b72c23be51a" exitCode=0 Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003131 4981 generic.go:334] "Generic (PLEG): container finished" podID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerID="bd2133012f7ec8d5b23febc4eae98775150d6779cece3953959dd0ebaafac076" exitCode=0 Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003140 4981 generic.go:334] "Generic (PLEG): container finished" podID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerID="9a1a2e131f5761d079c69185c95e394bd577eda00ea0354161ac5ab992f9e3d0" exitCode=0 Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003147 4981 generic.go:334] "Generic (PLEG): container finished" podID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerID="e0eca54f11d429374a0eee69171647db11c1192aa00c288bd9e67f3a6f0c0246" exitCode=0 Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003185 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerDied","Data":"77798546322cfdb767abb826f6d72d37c7c97fa182b47831196724af9d277123"} Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003211 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerDied","Data":"93c87ecb8d8bad33d71e9078051a7748cc757e16bcf80e48a23944e5c1b69077"} Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003223 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerDied","Data":"340a6d7be188f87cef0feaea5f958cc9043c49411edd955b9683aab0230bb9ce"} Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003231 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerDied","Data":"24a7799c5cd63e35072f81b37d3932a76fad3192143aeadfc8474ce31dd7dd07"} Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003240 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerDied","Data":"ee08f1be3428c964e3a5c4747f6aa00160451c72e3665c691697f802f5a0bff8"} Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003248 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerDied","Data":"6429fdd1fd1cd3788a688757b026c7af8c055f3fb7254d239ca6600f69c3448f"} Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003256 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerDied","Data":"7dfea33b75db73391310211c5e0efd16be4a0053864fa6f4abfd9bc77f7118f0"} Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003264 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerDied","Data":"80c7a986c413669964ba2fa274f8997a3315fbfd2c8ff1d23dbd74c88b68e595"} Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003272 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerDied","Data":"bd23d8482fb237875074c0a92ce77c62ec21a9f35c2014202018bbdef7e20697"} Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003281 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerDied","Data":"65f6f3c00e9667ac2dc2eaf62c9691a794f16c6916c044f6252dfe67b11c9cec"} Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 
19:20:35.003289 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerDied","Data":"38569ba465d1fbcc944576c382365600b3972a77a5d42e3a33726b72c23be51a"} Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003298 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerDied","Data":"bd2133012f7ec8d5b23febc4eae98775150d6779cece3953959dd0ebaafac076"} Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003306 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerDied","Data":"9a1a2e131f5761d079c69185c95e394bd577eda00ea0354161ac5ab992f9e3d0"} Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.003313 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerDied","Data":"e0eca54f11d429374a0eee69171647db11c1192aa00c288bd9e67f3a6f0c0246"} Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.007685 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-n5d2t" event={"ID":"214d65cb-9030-4093-853c-c1485fc1a30a","Type":"ContainerDied","Data":"ba63240105611112c7eade014bae6a4f00bd9f02ecaef4b06b4fc559ad12a48b"} Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.007818 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-n5d2t" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.009258 4981 generic.go:334] "Generic (PLEG): container finished" podID="392b1bc3-d461-4cc5-8d63-64922c6c3d04" containerID="a80ed4738b33c46d258ade1f4c61824d5ccb8d85f0a684c76db99ee197e25f02" exitCode=143 Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.009300 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" event={"ID":"392b1bc3-d461-4cc5-8d63-64922c6c3d04","Type":"ContainerDied","Data":"a80ed4738b33c46d258ade1f4c61824d5ccb8d85f0a684c76db99ee197e25f02"} Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.012699 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.016250 4981 generic.go:334] "Generic (PLEG): container finished" podID="6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2" containerID="da47666533c186d6e31e8632cdd467e851243fc49eff7f7fcac48865f970ee5b" exitCode=143 Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.016339 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f8d597b78-f58nv" event={"ID":"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2","Type":"ContainerDied","Data":"da47666533c186d6e31e8632cdd467e851243fc49eff7f7fcac48865f970ee5b"} Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.023629 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.027329 4981 scope.go:117] "RemoveContainer" containerID="63d0d07ec18342868dff13620687889bed168fed03e7ed3e8bab9795de7f6b30" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.028120 4981 generic.go:334] "Generic (PLEG): container finished" podID="faa3914e-426b-4791-8199-a7630729baf0" containerID="9b7e4dbef5e7da71bff472adbb75ceb0867028f6b271bd5767677e483d453167" exitCode=143 Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 
19:20:35.028178 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"faa3914e-426b-4791-8199-a7630729baf0","Type":"ContainerDied","Data":"9b7e4dbef5e7da71bff472adbb75ceb0867028f6b271bd5767677e483d453167"} Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.040807 4981 generic.go:334] "Generic (PLEG): container finished" podID="e691b557-a141-44b1-a2c7-4ba36af55a15" containerID="c280c1755db22cca5a1d60b0780818610aff15154fcb422c9167f6737e22b6d6" exitCode=0 Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.040911 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b6bf89d9-5xrv6" event={"ID":"e691b557-a141-44b1-a2c7-4ba36af55a15","Type":"ContainerDied","Data":"c280c1755db22cca5a1d60b0780818610aff15154fcb422c9167f6737e22b6d6"} Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.045550 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d57cb309-6812-4de2-a172-8d0896a7d864/ovsdbserver-nb/0.log" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.048560 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d57cb309-6812-4de2-a172-8d0896a7d864","Type":"ContainerDied","Data":"b950381ad932dec85f17cece713ee3798ca0dab46a203fac4c943412d83e3254"} Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.048665 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.057178 4981 generic.go:334] "Generic (PLEG): container finished" podID="0aa05f73-e7d2-440b-ab1f-780f23c26272" containerID="2affc55a229a8b585a8ade5bf43b2c239ea9c89cb121110f24bf358bb120da2a" exitCode=143 Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.057350 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0aa05f73-e7d2-440b-ab1f-780f23c26272","Type":"ContainerDied","Data":"2affc55a229a8b585a8ade5bf43b2c239ea9c89cb121110f24bf358bb120da2a"} Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.065799 4981 generic.go:334] "Generic (PLEG): container finished" podID="c1bafd9d-a283-406e-900b-3c5d1aae55fe" containerID="c56f4d215954c7c816813a5607ee1845d8bcd2e458593efdcc15c07e8b8dfdc9" exitCode=143 Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.066658 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c1bafd9d-a283-406e-900b-3c5d1aae55fe","Type":"ContainerDied","Data":"c56f4d215954c7c816813a5607ee1845d8bcd2e458593efdcc15c07e8b8dfdc9"} Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.077727 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e48390e6-5fc4-4c7e-983d-8338bf663e75-ovsdb-rundir\") pod \"e48390e6-5fc4-4c7e-983d-8338bf663e75\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.077780 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48390e6-5fc4-4c7e-983d-8338bf663e75-combined-ca-bundle\") pod \"e48390e6-5fc4-4c7e-983d-8338bf663e75\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.077860 4981 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"e48390e6-5fc4-4c7e-983d-8338bf663e75\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.077911 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e48390e6-5fc4-4c7e-983d-8338bf663e75-ovsdbserver-sb-tls-certs\") pod \"e48390e6-5fc4-4c7e-983d-8338bf663e75\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.078012 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e48390e6-5fc4-4c7e-983d-8338bf663e75-config\") pod \"e48390e6-5fc4-4c7e-983d-8338bf663e75\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.078120 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e48390e6-5fc4-4c7e-983d-8338bf663e75-scripts\") pod \"e48390e6-5fc4-4c7e-983d-8338bf663e75\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.078178 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e48390e6-5fc4-4c7e-983d-8338bf663e75-metrics-certs-tls-certs\") pod \"e48390e6-5fc4-4c7e-983d-8338bf663e75\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.078230 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnfns\" (UniqueName: \"kubernetes.io/projected/e48390e6-5fc4-4c7e-983d-8338bf663e75-kube-api-access-vnfns\") pod 
\"e48390e6-5fc4-4c7e-983d-8338bf663e75\" (UID: \"e48390e6-5fc4-4c7e-983d-8338bf663e75\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.079835 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e48390e6-5fc4-4c7e-983d-8338bf663e75-scripts" (OuterVolumeSpecName: "scripts") pod "e48390e6-5fc4-4c7e-983d-8338bf663e75" (UID: "e48390e6-5fc4-4c7e-983d-8338bf663e75"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.080640 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e48390e6-5fc4-4c7e-983d-8338bf663e75-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "e48390e6-5fc4-4c7e-983d-8338bf663e75" (UID: "e48390e6-5fc4-4c7e-983d-8338bf663e75"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.086754 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e48390e6-5fc4-4c7e-983d-8338bf663e75-config" (OuterVolumeSpecName: "config") pod "e48390e6-5fc4-4c7e-983d-8338bf663e75" (UID: "e48390e6-5fc4-4c7e-983d-8338bf663e75"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.105753 4981 scope.go:117] "RemoveContainer" containerID="ad16f6eebbf2beec846b93606e3bf6e09e066977ec08c59d3b1d01db8b58e6a8" Feb 27 19:20:35 crc kubenswrapper[4981]: E0227 19:20:35.106257 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:20:35 crc kubenswrapper[4981]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Feb 27 19:20:35 crc kubenswrapper[4981]: Feb 27 19:20:35 crc kubenswrapper[4981]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 27 19:20:35 crc kubenswrapper[4981]: Feb 27 19:20:35 crc kubenswrapper[4981]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 27 19:20:35 crc kubenswrapper[4981]: Feb 27 19:20:35 crc kubenswrapper[4981]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 27 19:20:35 crc kubenswrapper[4981]: Feb 27 19:20:35 crc kubenswrapper[4981]: if [ -n "glance" ]; then Feb 27 19:20:35 crc kubenswrapper[4981]: GRANT_DATABASE="glance" Feb 27 19:20:35 crc kubenswrapper[4981]: else Feb 27 19:20:35 crc kubenswrapper[4981]: GRANT_DATABASE="*" Feb 27 19:20:35 crc kubenswrapper[4981]: fi Feb 27 19:20:35 crc kubenswrapper[4981]: Feb 27 19:20:35 crc kubenswrapper[4981]: # going for maximum compatibility here: Feb 27 19:20:35 crc kubenswrapper[4981]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 27 19:20:35 crc kubenswrapper[4981]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 27 19:20:35 crc kubenswrapper[4981]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 27 19:20:35 crc kubenswrapper[4981]: # support updates Feb 27 19:20:35 crc kubenswrapper[4981]: Feb 27 19:20:35 crc kubenswrapper[4981]: $MYSQL_CMD < logger="UnhandledError" Feb 27 19:20:35 crc kubenswrapper[4981]: E0227 19:20:35.109001 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-e72a-account-create-update-fdvkk" podUID="5bf5661d-549c-4591-8f93-02bc09f63f29" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.116379 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e48390e6-5fc4-4c7e-983d-8338bf663e75-kube-api-access-vnfns" (OuterVolumeSpecName: "kube-api-access-vnfns") pod "e48390e6-5fc4-4c7e-983d-8338bf663e75" (UID: "e48390e6-5fc4-4c7e-983d-8338bf663e75"). InnerVolumeSpecName "kube-api-access-vnfns". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.118964 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "e48390e6-5fc4-4c7e-983d-8338bf663e75" (UID: "e48390e6-5fc4-4c7e-983d-8338bf663e75"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.133838 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e48390e6-5fc4-4c7e-983d-8338bf663e75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e48390e6-5fc4-4c7e-983d-8338bf663e75" (UID: "e48390e6-5fc4-4c7e-983d-8338bf663e75"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.137877 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-n5d2t"] Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.153894 4981 scope.go:117] "RemoveContainer" containerID="ad16f6eebbf2beec846b93606e3bf6e09e066977ec08c59d3b1d01db8b58e6a8" Feb 27 19:20:35 crc kubenswrapper[4981]: E0227 19:20:35.155124 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad16f6eebbf2beec846b93606e3bf6e09e066977ec08c59d3b1d01db8b58e6a8\": container with ID starting with ad16f6eebbf2beec846b93606e3bf6e09e066977ec08c59d3b1d01db8b58e6a8 not found: ID does not exist" containerID="ad16f6eebbf2beec846b93606e3bf6e09e066977ec08c59d3b1d01db8b58e6a8" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.155166 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad16f6eebbf2beec846b93606e3bf6e09e066977ec08c59d3b1d01db8b58e6a8"} err="failed to get container status \"ad16f6eebbf2beec846b93606e3bf6e09e066977ec08c59d3b1d01db8b58e6a8\": rpc error: code = NotFound desc = could not find container \"ad16f6eebbf2beec846b93606e3bf6e09e066977ec08c59d3b1d01db8b58e6a8\": container with ID starting with ad16f6eebbf2beec846b93606e3bf6e09e066977ec08c59d3b1d01db8b58e6a8 not found: ID does not exist" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.155195 4981 scope.go:117] "RemoveContainer" containerID="1d7e957ec0b2bda077dee170917b2cb8f7028331f4696bcedbf1e0135c091783" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.180896 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-n5d2t"] Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.182493 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-config\") pod \"e719b057-15c7-4204-9cbc-665f6653011f\" (UID: \"e719b057-15c7-4204-9cbc-665f6653011f\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.182577 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-dns-swift-storage-0\") pod \"e719b057-15c7-4204-9cbc-665f6653011f\" (UID: \"e719b057-15c7-4204-9cbc-665f6653011f\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.182675 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-ovsdbserver-nb\") pod \"e719b057-15c7-4204-9cbc-665f6653011f\" (UID: \"e719b057-15c7-4204-9cbc-665f6653011f\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.182763 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-ovsdbserver-sb\") pod \"e719b057-15c7-4204-9cbc-665f6653011f\" (UID: \"e719b057-15c7-4204-9cbc-665f6653011f\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.182814 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vbxt\" (UniqueName: \"kubernetes.io/projected/e719b057-15c7-4204-9cbc-665f6653011f-kube-api-access-7vbxt\") pod \"e719b057-15c7-4204-9cbc-665f6653011f\" (UID: \"e719b057-15c7-4204-9cbc-665f6653011f\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.182855 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-dns-svc\") pod \"e719b057-15c7-4204-9cbc-665f6653011f\" (UID: \"e719b057-15c7-4204-9cbc-665f6653011f\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 
19:20:35.183375 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnfns\" (UniqueName: \"kubernetes.io/projected/e48390e6-5fc4-4c7e-983d-8338bf663e75-kube-api-access-vnfns\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.183393 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e48390e6-5fc4-4c7e-983d-8338bf663e75-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.183405 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48390e6-5fc4-4c7e-983d-8338bf663e75-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.183427 4981 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.183438 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e48390e6-5fc4-4c7e-983d-8338bf663e75-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.183449 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e48390e6-5fc4-4c7e-983d-8338bf663e75-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.186694 4981 scope.go:117] "RemoveContainer" containerID="3a07e6641f50563b3e75f3c14deefbb8d5806ace0ae40d2c05789e3e1dfa6b3c" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.195392 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e719b057-15c7-4204-9cbc-665f6653011f-kube-api-access-7vbxt" (OuterVolumeSpecName: "kube-api-access-7vbxt") pod 
"e719b057-15c7-4204-9cbc-665f6653011f" (UID: "e719b057-15c7-4204-9cbc-665f6653011f"). InnerVolumeSpecName "kube-api-access-7vbxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.208122 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.215345 4981 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.232065 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e48390e6-5fc4-4c7e-983d-8338bf663e75-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "e48390e6-5fc4-4c7e-983d-8338bf663e75" (UID: "e48390e6-5fc4-4c7e-983d-8338bf663e75"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.233180 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e48390e6-5fc4-4c7e-983d-8338bf663e75-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "e48390e6-5fc4-4c7e-983d-8338bf663e75" (UID: "e48390e6-5fc4-4c7e-983d-8338bf663e75"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.236730 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.247782 4981 scope.go:117] "RemoveContainer" containerID="ca540b7d5b9796b148a47711ca823c44cdded0771435a1f7d0fe11fc82e0c7d3" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.255893 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e719b057-15c7-4204-9cbc-665f6653011f" (UID: "e719b057-15c7-4204-9cbc-665f6653011f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.272147 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e719b057-15c7-4204-9cbc-665f6653011f" (UID: "e719b057-15c7-4204-9cbc-665f6653011f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.273209 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e719b057-15c7-4204-9cbc-665f6653011f" (UID: "e719b057-15c7-4204-9cbc-665f6653011f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.279222 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e719b057-15c7-4204-9cbc-665f6653011f" (UID: "e719b057-15c7-4204-9cbc-665f6653011f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.283221 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-config" (OuterVolumeSpecName: "config") pod "e719b057-15c7-4204-9cbc-665f6653011f" (UID: "e719b057-15c7-4204-9cbc-665f6653011f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.284374 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.285172 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.285193 4981 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e48390e6-5fc4-4c7e-983d-8338bf663e75-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.285206 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.285248 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vbxt\" (UniqueName: \"kubernetes.io/projected/e719b057-15c7-4204-9cbc-665f6653011f-kube-api-access-7vbxt\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.285259 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.285270 4981 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.285281 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e48390e6-5fc4-4c7e-983d-8338bf663e75-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc 
kubenswrapper[4981]: I0227 19:20:35.285292 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.285302 4981 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e719b057-15c7-4204-9cbc-665f6653011f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.339274 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.386041 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b45ct\" (UniqueName: \"kubernetes.io/projected/392b1bc3-d461-4cc5-8d63-64922c6c3d04-kube-api-access-b45ct\") pod \"392b1bc3-d461-4cc5-8d63-64922c6c3d04\" (UID: \"392b1bc3-d461-4cc5-8d63-64922c6c3d04\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.386167 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/392b1bc3-d461-4cc5-8d63-64922c6c3d04-config-data\") pod \"392b1bc3-d461-4cc5-8d63-64922c6c3d04\" (UID: \"392b1bc3-d461-4cc5-8d63-64922c6c3d04\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.386245 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/392b1bc3-d461-4cc5-8d63-64922c6c3d04-config-data-custom\") pod \"392b1bc3-d461-4cc5-8d63-64922c6c3d04\" (UID: \"392b1bc3-d461-4cc5-8d63-64922c6c3d04\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.386264 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/392b1bc3-d461-4cc5-8d63-64922c6c3d04-logs\") pod \"392b1bc3-d461-4cc5-8d63-64922c6c3d04\" (UID: \"392b1bc3-d461-4cc5-8d63-64922c6c3d04\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.386560 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392b1bc3-d461-4cc5-8d63-64922c6c3d04-combined-ca-bundle\") pod \"392b1bc3-d461-4cc5-8d63-64922c6c3d04\" (UID: \"392b1bc3-d461-4cc5-8d63-64922c6c3d04\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.390959 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/392b1bc3-d461-4cc5-8d63-64922c6c3d04-logs" (OuterVolumeSpecName: "logs") pod "392b1bc3-d461-4cc5-8d63-64922c6c3d04" (UID: "392b1bc3-d461-4cc5-8d63-64922c6c3d04"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.393805 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/392b1bc3-d461-4cc5-8d63-64922c6c3d04-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "392b1bc3-d461-4cc5-8d63-64922c6c3d04" (UID: "392b1bc3-d461-4cc5-8d63-64922c6c3d04"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.394893 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/392b1bc3-d461-4cc5-8d63-64922c6c3d04-kube-api-access-b45ct" (OuterVolumeSpecName: "kube-api-access-b45ct") pod "392b1bc3-d461-4cc5-8d63-64922c6c3d04" (UID: "392b1bc3-d461-4cc5-8d63-64922c6c3d04"). InnerVolumeSpecName "kube-api-access-b45ct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.446523 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/392b1bc3-d461-4cc5-8d63-64922c6c3d04-config-data" (OuterVolumeSpecName: "config-data") pod "392b1bc3-d461-4cc5-8d63-64922c6c3d04" (UID: "392b1bc3-d461-4cc5-8d63-64922c6c3d04"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.464346 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/392b1bc3-d461-4cc5-8d63-64922c6c3d04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "392b1bc3-d461-4cc5-8d63-64922c6c3d04" (UID: "392b1bc3-d461-4cc5-8d63-64922c6c3d04"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.489806 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-run-httpd\") pod \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.489863 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4885\" (UniqueName: \"kubernetes.io/projected/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-kube-api-access-x4885\") pod \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.489935 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-log-httpd\") pod \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " Feb 27 
19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.489968 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-config-data\") pod \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.489996 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-combined-ca-bundle\") pod \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.490158 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-internal-tls-certs\") pod \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.490204 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-public-tls-certs\") pod \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.490268 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-etc-swift\") pod \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\" (UID: \"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.490852 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b45ct\" (UniqueName: 
\"kubernetes.io/projected/392b1bc3-d461-4cc5-8d63-64922c6c3d04-kube-api-access-b45ct\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.490872 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/392b1bc3-d461-4cc5-8d63-64922c6c3d04-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.490883 4981 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/392b1bc3-d461-4cc5-8d63-64922c6c3d04-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.490893 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/392b1bc3-d461-4cc5-8d63-64922c6c3d04-logs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.490906 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/392b1bc3-d461-4cc5-8d63-64922c6c3d04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: E0227 19:20:35.490980 4981 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Feb 27 19:20:35 crc kubenswrapper[4981]: E0227 19:20:35.491040 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/75dabc26-0258-4ef3-b0c8-04231f2fa5c5-operator-scripts podName:75dabc26-0258-4ef3-b0c8-04231f2fa5c5 nodeName:}" failed. No retries permitted until 2026-02-27 19:20:36.491021554 +0000 UTC m=+2135.969802714 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/75dabc26-0258-4ef3-b0c8-04231f2fa5c5-operator-scripts") pod "root-account-create-update-d9wv4" (UID: "75dabc26-0258-4ef3-b0c8-04231f2fa5c5") : configmap "openstack-cell1-scripts" not found Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.492159 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3e1537c5-44bf-4b8f-8ea4-07bf58baf21f" (UID: "3e1537c5-44bf-4b8f-8ea4-07bf58baf21f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.493683 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3e1537c5-44bf-4b8f-8ea4-07bf58baf21f" (UID: "3e1537c5-44bf-4b8f-8ea4-07bf58baf21f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.519645 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3e1537c5-44bf-4b8f-8ea4-07bf58baf21f" (UID: "3e1537c5-44bf-4b8f-8ea4-07bf58baf21f"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.519756 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-kube-api-access-x4885" (OuterVolumeSpecName: "kube-api-access-x4885") pod "3e1537c5-44bf-4b8f-8ea4-07bf58baf21f" (UID: "3e1537c5-44bf-4b8f-8ea4-07bf58baf21f"). InnerVolumeSpecName "kube-api-access-x4885". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.558409 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e1537c5-44bf-4b8f-8ea4-07bf58baf21f" (UID: "3e1537c5-44bf-4b8f-8ea4-07bf58baf21f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.562646 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3e1537c5-44bf-4b8f-8ea4-07bf58baf21f" (UID: "3e1537c5-44bf-4b8f-8ea4-07bf58baf21f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.577989 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-config-data" (OuterVolumeSpecName: "config-data") pod "3e1537c5-44bf-4b8f-8ea4-07bf58baf21f" (UID: "3e1537c5-44bf-4b8f-8ea4-07bf58baf21f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.592919 4981 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.592949 4981 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.592959 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4885\" (UniqueName: \"kubernetes.io/projected/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-kube-api-access-x4885\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.592969 4981 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.592978 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.592986 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.592994 4981 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.644962 4981 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3e1537c5-44bf-4b8f-8ea4-07bf58baf21f" (UID: "3e1537c5-44bf-4b8f-8ea4-07bf58baf21f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.655371 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19242b3f-f738-49a0-be1b-578a62ec5f22" path="/var/lib/kubelet/pods/19242b3f-f738-49a0-be1b-578a62ec5f22/volumes" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.657863 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="214d65cb-9030-4093-853c-c1485fc1a30a" path="/var/lib/kubelet/pods/214d65cb-9030-4093-853c-c1485fc1a30a/volumes" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.662506 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b4a8933-c57c-4c72-ba77-e6b637a282ee" path="/var/lib/kubelet/pods/3b4a8933-c57c-4c72-ba77-e6b637a282ee/volumes" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.665529 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6047b4ff-4778-43fd-8d8e-c84b76ff271e" path="/var/lib/kubelet/pods/6047b4ff-4778-43fd-8d8e-c84b76ff271e/volumes" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.666119 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fed081d-f826-4383-b919-126d6a2aa92d" path="/var/lib/kubelet/pods/6fed081d-f826-4383-b919-126d6a2aa92d/volumes" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.675771 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94eef5c5-d31c-4759-995e-ce36727018f1" path="/var/lib/kubelet/pods/94eef5c5-d31c-4759-995e-ce36727018f1/volumes" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.680759 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="99c593ba-9134-4372-8392-6903d47aba28" path="/var/lib/kubelet/pods/99c593ba-9134-4372-8392-6903d47aba28/volumes" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.685705 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aec1d5c5-b41c-4d8b-9810-04a25a18c1b1" path="/var/lib/kubelet/pods/aec1d5c5-b41c-4d8b-9810-04a25a18c1b1/volumes" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.690018 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d57cb309-6812-4de2-a172-8d0896a7d864" path="/var/lib/kubelet/pods/d57cb309-6812-4de2-a172-8d0896a7d864/volumes" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.694925 4981 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.697755 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d923459f-90f4-4399-80a0-4e22daa1eadf" path="/var/lib/kubelet/pods/d923459f-90f4-4399-80a0-4e22daa1eadf/volumes" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.698531 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed0c2fbd-f556-4dba-a374-4f212f96210a" path="/var/lib/kubelet/pods/ed0c2fbd-f556-4dba-a374-4f212f96210a/volumes" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.699676 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1cb5d15-1a22-4c56-a028-11eb02f9e043" path="/var/lib/kubelet/pods/f1cb5d15-1a22-4c56-a028-11eb02f9e043/volumes" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.716696 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.795907 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-nova-novncproxy-tls-certs\") pod \"4e27e8aa-f220-4415-8670-ca9186161dba\" (UID: \"4e27e8aa-f220-4415-8670-ca9186161dba\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.796403 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-vencrypt-tls-certs\") pod \"4e27e8aa-f220-4415-8670-ca9186161dba\" (UID: \"4e27e8aa-f220-4415-8670-ca9186161dba\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.796454 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-config-data\") pod \"4e27e8aa-f220-4415-8670-ca9186161dba\" (UID: \"4e27e8aa-f220-4415-8670-ca9186161dba\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.796500 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-combined-ca-bundle\") pod \"4e27e8aa-f220-4415-8670-ca9186161dba\" (UID: \"4e27e8aa-f220-4415-8670-ca9186161dba\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.796531 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzpvk\" (UniqueName: \"kubernetes.io/projected/4e27e8aa-f220-4415-8670-ca9186161dba-kube-api-access-fzpvk\") pod \"4e27e8aa-f220-4415-8670-ca9186161dba\" (UID: \"4e27e8aa-f220-4415-8670-ca9186161dba\") " Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.806922 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/4e27e8aa-f220-4415-8670-ca9186161dba-kube-api-access-fzpvk" (OuterVolumeSpecName: "kube-api-access-fzpvk") pod "4e27e8aa-f220-4415-8670-ca9186161dba" (UID: "4e27e8aa-f220-4415-8670-ca9186161dba"). InnerVolumeSpecName "kube-api-access-fzpvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.843130 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e27e8aa-f220-4415-8670-ca9186161dba" (UID: "4e27e8aa-f220-4415-8670-ca9186161dba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: E0227 19:20:35.858895 4981 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c22e070_8348_440e_a801_64927da21e98.slice/crio-conmon-156db9f0e659f08402952be0d8b3b765d9002fac585f7beceaa66f2923a4c3d1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode48390e6_5fc4_4c7e_983d_8338bf663e75.slice/crio-69a3f607a6e0f7579865d967b1d4a552c510c27fe2811ab7df2ad0839c99d9d8\": RecentStats: unable to find data in memory cache]" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.868687 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-config-data" (OuterVolumeSpecName: "config-data") pod "4e27e8aa-f220-4415-8670-ca9186161dba" (UID: "4e27e8aa-f220-4415-8670-ca9186161dba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: E0227 19:20:35.903349 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-vencrypt-tls-certs podName:4e27e8aa-f220-4415-8670-ca9186161dba nodeName:}" failed. No retries permitted until 2026-02-27 19:20:36.403306591 +0000 UTC m=+2135.882087751 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "vencrypt-tls-certs" (UniqueName: "kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-vencrypt-tls-certs") pod "4e27e8aa-f220-4415-8670-ca9186161dba" (UID: "4e27e8aa-f220-4415-8670-ca9186161dba") : error deleting /var/lib/kubelet/pods/4e27e8aa-f220-4415-8670-ca9186161dba/volume-subpaths: remove /var/lib/kubelet/pods/4e27e8aa-f220-4415-8670-ca9186161dba/volume-subpaths: no such file or directory Feb 27 19:20:35 crc kubenswrapper[4981]: E0227 19:20:35.905679 4981 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 27 19:20:35 crc kubenswrapper[4981]: E0227 19:20:35.905827 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-config-data podName:991e04a2-e14a-4987-a7d8-b7f5db5cb8e3 nodeName:}" failed. No retries permitted until 2026-02-27 19:20:39.905744216 +0000 UTC m=+2139.384525376 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-config-data") pod "rabbitmq-server-0" (UID: "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3") : configmap "rabbitmq-config-data" not found Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.906664 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.906697 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzpvk\" (UniqueName: \"kubernetes.io/projected/4e27e8aa-f220-4415-8670-ca9186161dba-kube-api-access-fzpvk\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.906717 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.911386 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "4e27e8aa-f220-4415-8670-ca9186161dba" (UID: "4e27e8aa-f220-4415-8670-ca9186161dba"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:35 crc kubenswrapper[4981]: I0227 19:20:35.952334 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.013311 4981 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.077125 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" event={"ID":"e719b057-15c7-4204-9cbc-665f6653011f","Type":"ContainerDied","Data":"568fd33f85655df09f1ea4a11de44487aba76a40653306981d811cec64ccaf45"} Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.077170 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-r2zw4" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.077178 4981 scope.go:117] "RemoveContainer" containerID="98584a232e3fef55da5240ff567aead3a2ca1595c80c0f7568768a774b5bbf94" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.079230 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-badb-account-create-update-m5cpb" event={"ID":"a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21","Type":"ContainerStarted","Data":"eedd2fd6ac96be298b42eb34c961a146433a0c2d06e56d11d2b43c735eba36ed"} Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.081652 4981 generic.go:334] "Generic (PLEG): container finished" podID="1c22e070-8348-440e-a801-64927da21e98" containerID="156db9f0e659f08402952be0d8b3b765d9002fac585f7beceaa66f2923a4c3d1" exitCode=0 Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.081699 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1c22e070-8348-440e-a801-64927da21e98","Type":"ContainerDied","Data":"156db9f0e659f08402952be0d8b3b765d9002fac585f7beceaa66f2923a4c3d1"} Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.081718 4981 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1c22e070-8348-440e-a801-64927da21e98","Type":"ContainerDied","Data":"987f100d8f498575cccb8c24e644d5a86184497fb3949337820498f8b9fff318"} Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.081771 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.088457 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9966-account-create-update-v784d" event={"ID":"71b893f8-fc1b-4dba-b63a-3c759969ae3c","Type":"ContainerStarted","Data":"adbcaccdfb392c0b2a8f91afa24d19374a7bc8c9453e5f4c3207f1aeeac0dea3"} Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.096164 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-fd6854db9-vlzhb" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.096214 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-fd6854db9-vlzhb" event={"ID":"3e1537c5-44bf-4b8f-8ea4-07bf58baf21f","Type":"ContainerDied","Data":"5e437d37a4c480c66b446389b230275009eff9e24954231b4d34cdecae0b30f2"} Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.102112 4981 generic.go:334] "Generic (PLEG): container finished" podID="4e27e8aa-f220-4415-8670-ca9186161dba" containerID="6c1a95f2a0729962517d2e152f52fd832e734caecb67ecd85b30fb4674656560" exitCode=0 Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.102175 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.102190 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4e27e8aa-f220-4415-8670-ca9186161dba","Type":"ContainerDied","Data":"6c1a95f2a0729962517d2e152f52fd832e734caecb67ecd85b30fb4674656560"} Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.102219 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4e27e8aa-f220-4415-8670-ca9186161dba","Type":"ContainerDied","Data":"313deef2c5a8e6f01649d281da41096877fef8abf38e0f04b1d9f6318b78f7f9"} Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.113437 4981 generic.go:334] "Generic (PLEG): container finished" podID="392b1bc3-d461-4cc5-8d63-64922c6c3d04" containerID="29a09efd35a13f9913a2db4c13af471be771dc66870af5193ce438f581026f26" exitCode=0 Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.113591 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.113963 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1c22e070-8348-440e-a801-64927da21e98-config-data-default\") pod \"1c22e070-8348-440e-a801-64927da21e98\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.114131 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkjj4\" (UniqueName: \"kubernetes.io/projected/1c22e070-8348-440e-a801-64927da21e98-kube-api-access-wkjj4\") pod \"1c22e070-8348-440e-a801-64927da21e98\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.114238 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c22e070-8348-440e-a801-64927da21e98-galera-tls-certs\") pod \"1c22e070-8348-440e-a801-64927da21e98\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.114367 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1c22e070-8348-440e-a801-64927da21e98-config-data-generated\") pod \"1c22e070-8348-440e-a801-64927da21e98\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.114396 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"1c22e070-8348-440e-a801-64927da21e98\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.114400 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" event={"ID":"392b1bc3-d461-4cc5-8d63-64922c6c3d04","Type":"ContainerDied","Data":"29a09efd35a13f9913a2db4c13af471be771dc66870af5193ce438f581026f26"} Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.114427 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm" event={"ID":"392b1bc3-d461-4cc5-8d63-64922c6c3d04","Type":"ContainerDied","Data":"d7bae1ce00211aa45d99ba1434230ca85e9fa9ed26d123f5c1c87dd87f21813d"} Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.114458 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1c22e070-8348-440e-a801-64927da21e98-kolla-config\") pod \"1c22e070-8348-440e-a801-64927da21e98\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.114495 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c22e070-8348-440e-a801-64927da21e98-operator-scripts\") pod \"1c22e070-8348-440e-a801-64927da21e98\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.114518 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c22e070-8348-440e-a801-64927da21e98-combined-ca-bundle\") pod \"1c22e070-8348-440e-a801-64927da21e98\" (UID: \"1c22e070-8348-440e-a801-64927da21e98\") " Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.114942 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c22e070-8348-440e-a801-64927da21e98-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "1c22e070-8348-440e-a801-64927da21e98" (UID: "1c22e070-8348-440e-a801-64927da21e98"). 
InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.115583 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c22e070-8348-440e-a801-64927da21e98-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "1c22e070-8348-440e-a801-64927da21e98" (UID: "1c22e070-8348-440e-a801-64927da21e98"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.115974 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c22e070-8348-440e-a801-64927da21e98-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "1c22e070-8348-440e-a801-64927da21e98" (UID: "1c22e070-8348-440e-a801-64927da21e98"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.116732 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c22e070-8348-440e-a801-64927da21e98-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1c22e070-8348-440e-a801-64927da21e98" (UID: "1c22e070-8348-440e-a801-64927da21e98"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.119798 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-421d-account-create-update-bbmzr" event={"ID":"99264d6c-f9e9-4b89-882f-f9024381b3e4","Type":"ContainerStarted","Data":"47fbf0a45be1fc9f328fed1ae7c5edb5409dc1b3b4f37473d899d3ba2f9c656c"} Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.149006 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c22e070-8348-440e-a801-64927da21e98-kube-api-access-wkjj4" (OuterVolumeSpecName: "kube-api-access-wkjj4") pod "1c22e070-8348-440e-a801-64927da21e98" (UID: "1c22e070-8348-440e-a801-64927da21e98"). InnerVolumeSpecName "kube-api-access-wkjj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.157709 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "mysql-db") pod "1c22e070-8348-440e-a801-64927da21e98" (UID: "1c22e070-8348-440e-a801-64927da21e98"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.158559 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-bbcc-account-create-update-rc6qw" event={"ID":"efe6b6e0-5d7c-4207-b5fc-44f510e301b7","Type":"ContainerStarted","Data":"0de0b4f9327bbe8578b7a90d2791fd2f62141cedb1902d04503d4cf3cb137b14"} Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.159783 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-r2zw4"] Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.165383 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c22e070-8348-440e-a801-64927da21e98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c22e070-8348-440e-a801-64927da21e98" (UID: "1c22e070-8348-440e-a801-64927da21e98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.177850 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a620-account-create-update-dxhm7" event={"ID":"5eb78cff-4f39-4b26-8cf4-c1c8f64730ae","Type":"ContainerStarted","Data":"13e2783306ace3a3c2a1f2f5f8811e608a377b0d6a47979cdab790950dca05a2"} Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.177946 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.192670 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-r2zw4"] Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.195209 4981 scope.go:117] "RemoveContainer" containerID="4c58a0a538d2f90812bdf3348eeccd9e9b536d604d550f56ce5e709e4e2e2a00" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.208124 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c22e070-8348-440e-a801-64927da21e98-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "1c22e070-8348-440e-a801-64927da21e98" (UID: "1c22e070-8348-440e-a801-64927da21e98"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.217696 4981 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1c22e070-8348-440e-a801-64927da21e98-config-data-generated\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.217736 4981 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.217748 4981 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1c22e070-8348-440e-a801-64927da21e98-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.217756 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c22e070-8348-440e-a801-64927da21e98-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.217766 4981 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c22e070-8348-440e-a801-64927da21e98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.217778 4981 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1c22e070-8348-440e-a801-64927da21e98-config-data-default\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.217786 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkjj4\" (UniqueName: \"kubernetes.io/projected/1c22e070-8348-440e-a801-64927da21e98-kube-api-access-wkjj4\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.217794 4981 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c22e070-8348-440e-a801-64927da21e98-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.222522 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-fd6854db9-vlzhb"] Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.244067 4981 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.244223 4981 scope.go:117] "RemoveContainer" containerID="156db9f0e659f08402952be0d8b3b765d9002fac585f7beceaa66f2923a4c3d1" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.246511 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-fd6854db9-vlzhb"] Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.319337 4981 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" 
DevicePath \"\"" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.329903 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm"] Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.347210 4981 scope.go:117] "RemoveContainer" containerID="b8bfb43832e3fd67b407a361e72e206020352adf5ad9c8cc0f364e5cfb240b8c" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.360101 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-69d4bd5f7d-zs8qm"] Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.407808 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.410652 4981 scope.go:117] "RemoveContainer" containerID="156db9f0e659f08402952be0d8b3b765d9002fac585f7beceaa66f2923a4c3d1" Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.414713 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"156db9f0e659f08402952be0d8b3b765d9002fac585f7beceaa66f2923a4c3d1\": container with ID starting with 156db9f0e659f08402952be0d8b3b765d9002fac585f7beceaa66f2923a4c3d1 not found: ID does not exist" containerID="156db9f0e659f08402952be0d8b3b765d9002fac585f7beceaa66f2923a4c3d1" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.414761 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"156db9f0e659f08402952be0d8b3b765d9002fac585f7beceaa66f2923a4c3d1"} err="failed to get container status \"156db9f0e659f08402952be0d8b3b765d9002fac585f7beceaa66f2923a4c3d1\": rpc error: code = NotFound desc = could not find container \"156db9f0e659f08402952be0d8b3b765d9002fac585f7beceaa66f2923a4c3d1\": container with ID starting with 156db9f0e659f08402952be0d8b3b765d9002fac585f7beceaa66f2923a4c3d1 not found: ID does not exist" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 
19:20:36.414788 4981 scope.go:117] "RemoveContainer" containerID="b8bfb43832e3fd67b407a361e72e206020352adf5ad9c8cc0f364e5cfb240b8c" Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.415456 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8bfb43832e3fd67b407a361e72e206020352adf5ad9c8cc0f364e5cfb240b8c\": container with ID starting with b8bfb43832e3fd67b407a361e72e206020352adf5ad9c8cc0f364e5cfb240b8c not found: ID does not exist" containerID="b8bfb43832e3fd67b407a361e72e206020352adf5ad9c8cc0f364e5cfb240b8c" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.415476 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8bfb43832e3fd67b407a361e72e206020352adf5ad9c8cc0f364e5cfb240b8c"} err="failed to get container status \"b8bfb43832e3fd67b407a361e72e206020352adf5ad9c8cc0f364e5cfb240b8c\": rpc error: code = NotFound desc = could not find container \"b8bfb43832e3fd67b407a361e72e206020352adf5ad9c8cc0f364e5cfb240b8c\": container with ID starting with b8bfb43832e3fd67b407a361e72e206020352adf5ad9c8cc0f364e5cfb240b8c not found: ID does not exist" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.415496 4981 scope.go:117] "RemoveContainer" containerID="469a1e71362f66007e5f99a18ff696a214d1ad52159039dec19da2dbfa3d13ff" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.420341 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-vencrypt-tls-certs\") pod \"4e27e8aa-f220-4415-8670-ca9186161dba\" (UID: \"4e27e8aa-f220-4415-8670-ca9186161dba\") " Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.427183 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.428329 4981 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "4e27e8aa-f220-4415-8670-ca9186161dba" (UID: "4e27e8aa-f220-4415-8670-ca9186161dba"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.460165 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-fjzpf"] Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.460651 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e1537c5-44bf-4b8f-8ea4-07bf58baf21f" containerName="proxy-server" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.460698 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e1537c5-44bf-4b8f-8ea4-07bf58baf21f" containerName="proxy-server" Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.460721 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e27e8aa-f220-4415-8670-ca9186161dba" containerName="nova-cell1-novncproxy-novncproxy" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.460729 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e27e8aa-f220-4415-8670-ca9186161dba" containerName="nova-cell1-novncproxy-novncproxy" Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.460739 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c22e070-8348-440e-a801-64927da21e98" containerName="galera" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.460749 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c22e070-8348-440e-a801-64927da21e98" containerName="galera" Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.460767 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e48390e6-5fc4-4c7e-983d-8338bf663e75" containerName="openstack-network-exporter" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.460775 4981 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="e48390e6-5fc4-4c7e-983d-8338bf663e75" containerName="openstack-network-exporter" Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.460793 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="392b1bc3-d461-4cc5-8d63-64922c6c3d04" containerName="barbican-keystone-listener" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.460800 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="392b1bc3-d461-4cc5-8d63-64922c6c3d04" containerName="barbican-keystone-listener" Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.460814 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec1d5c5-b41c-4d8b-9810-04a25a18c1b1" containerName="openstack-network-exporter" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.460821 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec1d5c5-b41c-4d8b-9810-04a25a18c1b1" containerName="openstack-network-exporter" Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.460833 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e719b057-15c7-4204-9cbc-665f6653011f" containerName="dnsmasq-dns" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.460843 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e719b057-15c7-4204-9cbc-665f6653011f" containerName="dnsmasq-dns" Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.460859 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e48390e6-5fc4-4c7e-983d-8338bf663e75" containerName="ovsdbserver-sb" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.460867 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e48390e6-5fc4-4c7e-983d-8338bf663e75" containerName="ovsdbserver-sb" Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.460885 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="214d65cb-9030-4093-853c-c1485fc1a30a" containerName="ovn-controller" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 
19:20:36.460892 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="214d65cb-9030-4093-853c-c1485fc1a30a" containerName="ovn-controller" Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.460907 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="392b1bc3-d461-4cc5-8d63-64922c6c3d04" containerName="barbican-keystone-listener-log" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.460916 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="392b1bc3-d461-4cc5-8d63-64922c6c3d04" containerName="barbican-keystone-listener-log" Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.460935 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d923459f-90f4-4399-80a0-4e22daa1eadf" containerName="ovn-northd" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.460943 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="d923459f-90f4-4399-80a0-4e22daa1eadf" containerName="ovn-northd" Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.460959 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d57cb309-6812-4de2-a172-8d0896a7d864" containerName="ovsdbserver-nb" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.460966 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="d57cb309-6812-4de2-a172-8d0896a7d864" containerName="ovsdbserver-nb" Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.460979 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d57cb309-6812-4de2-a172-8d0896a7d864" containerName="openstack-network-exporter" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.460987 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="d57cb309-6812-4de2-a172-8d0896a7d864" containerName="openstack-network-exporter" Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.461002 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e1537c5-44bf-4b8f-8ea4-07bf58baf21f" containerName="proxy-httpd" Feb 27 19:20:36 crc 
kubenswrapper[4981]: I0227 19:20:36.461009 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e1537c5-44bf-4b8f-8ea4-07bf58baf21f" containerName="proxy-httpd" Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.461020 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d923459f-90f4-4399-80a0-4e22daa1eadf" containerName="openstack-network-exporter" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.461028 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="d923459f-90f4-4399-80a0-4e22daa1eadf" containerName="openstack-network-exporter" Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.461046 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c22e070-8348-440e-a801-64927da21e98" containerName="mysql-bootstrap" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.461071 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c22e070-8348-440e-a801-64927da21e98" containerName="mysql-bootstrap" Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.461080 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e719b057-15c7-4204-9cbc-665f6653011f" containerName="init" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.461088 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e719b057-15c7-4204-9cbc-665f6653011f" containerName="init" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.461283 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e1537c5-44bf-4b8f-8ea4-07bf58baf21f" containerName="proxy-server" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.461294 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c22e070-8348-440e-a801-64927da21e98" containerName="galera" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.461305 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="aec1d5c5-b41c-4d8b-9810-04a25a18c1b1" containerName="openstack-network-exporter" Feb 27 19:20:36 crc 
kubenswrapper[4981]: I0227 19:20:36.461316 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="392b1bc3-d461-4cc5-8d63-64922c6c3d04" containerName="barbican-keystone-listener" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.461323 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="e48390e6-5fc4-4c7e-983d-8338bf663e75" containerName="openstack-network-exporter" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.461331 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="392b1bc3-d461-4cc5-8d63-64922c6c3d04" containerName="barbican-keystone-listener-log" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.461337 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="d923459f-90f4-4399-80a0-4e22daa1eadf" containerName="ovn-northd" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.461349 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="e48390e6-5fc4-4c7e-983d-8338bf663e75" containerName="ovsdbserver-sb" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.461360 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="d57cb309-6812-4de2-a172-8d0896a7d864" containerName="ovsdbserver-nb" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.461368 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="e719b057-15c7-4204-9cbc-665f6653011f" containerName="dnsmasq-dns" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.461377 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e1537c5-44bf-4b8f-8ea4-07bf58baf21f" containerName="proxy-httpd" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.461387 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="d57cb309-6812-4de2-a172-8d0896a7d864" containerName="openstack-network-exporter" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.461396 4981 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d923459f-90f4-4399-80a0-4e22daa1eadf" containerName="openstack-network-exporter" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.461405 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="214d65cb-9030-4093-853c-c1485fc1a30a" containerName="ovn-controller" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.461413 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e27e8aa-f220-4415-8670-ca9186161dba" containerName="nova-cell1-novncproxy-novncproxy" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.464349 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fjzpf"] Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.470251 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fjzpf" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.472541 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.503815 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.511209 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.514671 4981 scope.go:117] "RemoveContainer" containerID="952ca9fd397f06e97e6cb589cce8711001ad9b1917f1597f155cbdfe54ecd748" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.522285 4981 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e27e8aa-f220-4415-8670-ca9186161dba-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.522311 4981 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not 
found Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.522387 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/75dabc26-0258-4ef3-b0c8-04231f2fa5c5-operator-scripts podName:75dabc26-0258-4ef3-b0c8-04231f2fa5c5 nodeName:}" failed. No retries permitted until 2026-02-27 19:20:38.522368363 +0000 UTC m=+2138.001149523 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/75dabc26-0258-4ef3-b0c8-04231f2fa5c5-operator-scripts") pod "root-account-create-update-d9wv4" (UID: "75dabc26-0258-4ef3-b0c8-04231f2fa5c5") : configmap "openstack-cell1-scripts" not found Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.551266 4981 scope.go:117] "RemoveContainer" containerID="6c1a95f2a0729962517d2e152f52fd832e734caecb67ecd85b30fb4674656560" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.553702 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.559153 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0d200585-c61d-43f8-a17e-54f695df7dbe" containerName="ceilometer-central-agent" containerID="cri-o://bda927ae23d6de9d50708df6de11982ad0fda24fdecf895c9e04685dc88ac49b" gracePeriod=30 Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.559297 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0d200585-c61d-43f8-a17e-54f695df7dbe" containerName="proxy-httpd" containerID="cri-o://fae968112d7a204ca91d2a8361567a64886536860a977043c0f6d6e84eeb765b" gracePeriod=30 Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.559336 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0d200585-c61d-43f8-a17e-54f695df7dbe" containerName="sg-core" 
containerID="cri-o://557acdada7a6927a2b4039b69f2529a1cfea5b22f511fe9433b3df0d998e6ebe" gracePeriod=30 Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.559371 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0d200585-c61d-43f8-a17e-54f695df7dbe" containerName="ceilometer-notification-agent" containerID="cri-o://52ab7ade36a7bea163dae4153632d761e9bf6f316544d5345a2f4fc82200a997" gracePeriod=30 Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.590268 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.590494 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="89937a2b-e16c-4964-a540-5a2f8fe812b7" containerName="kube-state-metrics" containerID="cri-o://6b8b8b2fc415f51817d8b6a2764a8f9e401f55ae49e1275fbc6feb278754299e" gracePeriod=30 Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.625226 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsfr9\" (UniqueName: \"kubernetes.io/projected/81568895-7de5-48b6-a4c0-6601ca0aa244-kube-api-access-lsfr9\") pod \"root-account-create-update-fjzpf\" (UID: \"81568895-7de5-48b6-a4c0-6601ca0aa244\") " pod="openstack/root-account-create-update-fjzpf" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.625386 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81568895-7de5-48b6-a4c0-6601ca0aa244-operator-scripts\") pod \"root-account-create-update-fjzpf\" (UID: \"81568895-7de5-48b6-a4c0-6601ca0aa244\") " pod="openstack/root-account-create-update-fjzpf" Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.625542 4981 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap 
"rabbitmq-cell1-config-data" not found Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.625611 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-config-data podName:f928877c-eaff-4ab4-ae3b-ba6ed721642c nodeName:}" failed. No retries permitted until 2026-02-27 19:20:40.62559596 +0000 UTC m=+2140.104377120 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-config-data") pod "rabbitmq-cell1-server-0" (UID: "f928877c-eaff-4ab4-ae3b-ba6ed721642c") : configmap "rabbitmq-cell1-config-data" not found Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.651990 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-bbcc-account-create-update-rc6qw" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.678373 4981 scope.go:117] "RemoveContainer" containerID="6c1a95f2a0729962517d2e152f52fd832e734caecb67ecd85b30fb4674656560" Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.679975 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c1a95f2a0729962517d2e152f52fd832e734caecb67ecd85b30fb4674656560\": container with ID starting with 6c1a95f2a0729962517d2e152f52fd832e734caecb67ecd85b30fb4674656560 not found: ID does not exist" containerID="6c1a95f2a0729962517d2e152f52fd832e734caecb67ecd85b30fb4674656560" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.680030 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c1a95f2a0729962517d2e152f52fd832e734caecb67ecd85b30fb4674656560"} err="failed to get container status \"6c1a95f2a0729962517d2e152f52fd832e734caecb67ecd85b30fb4674656560\": rpc error: code = NotFound desc = could not find container 
\"6c1a95f2a0729962517d2e152f52fd832e734caecb67ecd85b30fb4674656560\": container with ID starting with 6c1a95f2a0729962517d2e152f52fd832e734caecb67ecd85b30fb4674656560 not found: ID does not exist" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.680093 4981 scope.go:117] "RemoveContainer" containerID="29a09efd35a13f9913a2db4c13af471be771dc66870af5193ce438f581026f26" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.716147 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-2b17-account-create-update-lp7db"] Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.719681 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2b17-account-create-update-lp7db" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.728374 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.734329 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll2xj\" (UniqueName: \"kubernetes.io/projected/efe6b6e0-5d7c-4207-b5fc-44f510e301b7-kube-api-access-ll2xj\") pod \"efe6b6e0-5d7c-4207-b5fc-44f510e301b7\" (UID: \"efe6b6e0-5d7c-4207-b5fc-44f510e301b7\") " Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.734453 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efe6b6e0-5d7c-4207-b5fc-44f510e301b7-operator-scripts\") pod \"efe6b6e0-5d7c-4207-b5fc-44f510e301b7\" (UID: \"efe6b6e0-5d7c-4207-b5fc-44f510e301b7\") " Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.735004 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81568895-7de5-48b6-a4c0-6601ca0aa244-operator-scripts\") pod \"root-account-create-update-fjzpf\" (UID: \"81568895-7de5-48b6-a4c0-6601ca0aa244\") " 
pod="openstack/root-account-create-update-fjzpf" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.735328 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsfr9\" (UniqueName: \"kubernetes.io/projected/81568895-7de5-48b6-a4c0-6601ca0aa244-kube-api-access-lsfr9\") pod \"root-account-create-update-fjzpf\" (UID: \"81568895-7de5-48b6-a4c0-6601ca0aa244\") " pod="openstack/root-account-create-update-fjzpf" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.738407 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efe6b6e0-5d7c-4207-b5fc-44f510e301b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "efe6b6e0-5d7c-4207-b5fc-44f510e301b7" (UID: "efe6b6e0-5d7c-4207-b5fc-44f510e301b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.739981 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81568895-7de5-48b6-a4c0-6601ca0aa244-operator-scripts\") pod \"root-account-create-update-fjzpf\" (UID: \"81568895-7de5-48b6-a4c0-6601ca0aa244\") " pod="openstack/root-account-create-update-fjzpf" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.747300 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efe6b6e0-5d7c-4207-b5fc-44f510e301b7-kube-api-access-ll2xj" (OuterVolumeSpecName: "kube-api-access-ll2xj") pod "efe6b6e0-5d7c-4207-b5fc-44f510e301b7" (UID: "efe6b6e0-5d7c-4207-b5fc-44f510e301b7"). InnerVolumeSpecName "kube-api-access-ll2xj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.758181 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2b17-account-create-update-lp7db"] Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.792108 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.792625 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="b63f8c5e-ff68-4a07-a2a5-5c3290e21669" containerName="memcached" containerID="cri-o://2831b1fbd633eee20cda167168b129b60e56a04ba92a023a388d553dde52965e" gracePeriod=30 Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.806965 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsfr9\" (UniqueName: \"kubernetes.io/projected/81568895-7de5-48b6-a4c0-6601ca0aa244-kube-api-access-lsfr9\") pod \"root-account-create-update-fjzpf\" (UID: \"81568895-7de5-48b6-a4c0-6601ca0aa244\") " pod="openstack/root-account-create-update-fjzpf" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.812839 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fjzpf" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.847662 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/223e876e-b201-4728-b355-6a2385e78766-operator-scripts\") pod \"keystone-2b17-account-create-update-lp7db\" (UID: \"223e876e-b201-4728-b355-6a2385e78766\") " pod="openstack/keystone-2b17-account-create-update-lp7db" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.847816 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnssf\" (UniqueName: \"kubernetes.io/projected/223e876e-b201-4728-b355-6a2385e78766-kube-api-access-dnssf\") pod \"keystone-2b17-account-create-update-lp7db\" (UID: \"223e876e-b201-4728-b355-6a2385e78766\") " pod="openstack/keystone-2b17-account-create-update-lp7db" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.847886 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll2xj\" (UniqueName: \"kubernetes.io/projected/efe6b6e0-5d7c-4207-b5fc-44f510e301b7-kube-api-access-ll2xj\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.847900 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efe6b6e0-5d7c-4207-b5fc-44f510e301b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.854232 4981 scope.go:117] "RemoveContainer" containerID="a80ed4738b33c46d258ade1f4c61824d5ccb8d85f0a684c76db99ee197e25f02" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.854392 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-ncckq"] Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.873117 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/keystone-bootstrap-ncckq"] Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.893843 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-crjbt"] Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.913330 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6b879f46f9-hf222"] Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.913618 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-6b879f46f9-hf222" podUID="087da308-30ee-4a17-945a-844baf0cf4b4" containerName="keystone-api" containerID="cri-o://59a5b965f6d87fa3d0946ea826d976074eb37a38f37b9809cebb0d08e9b762b3" gracePeriod=30 Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.950374 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/223e876e-b201-4728-b355-6a2385e78766-operator-scripts\") pod \"keystone-2b17-account-create-update-lp7db\" (UID: \"223e876e-b201-4728-b355-6a2385e78766\") " pod="openstack/keystone-2b17-account-create-update-lp7db" Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.950488 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnssf\" (UniqueName: \"kubernetes.io/projected/223e876e-b201-4728-b355-6a2385e78766-kube-api-access-dnssf\") pod \"keystone-2b17-account-create-update-lp7db\" (UID: \"223e876e-b201-4728-b355-6a2385e78766\") " pod="openstack/keystone-2b17-account-create-update-lp7db" Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.951023 4981 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.951121 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/223e876e-b201-4728-b355-6a2385e78766-operator-scripts podName:223e876e-b201-4728-b355-6a2385e78766 
nodeName:}" failed. No retries permitted until 2026-02-27 19:20:37.451103007 +0000 UTC m=+2136.929884167 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/223e876e-b201-4728-b355-6a2385e78766-operator-scripts") pod "keystone-2b17-account-create-update-lp7db" (UID: "223e876e-b201-4728-b355-6a2385e78766") : configmap "openstack-scripts" not found Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.959253 4981 projected.go:194] Error preparing data for projected volume kube-api-access-dnssf for pod openstack/keystone-2b17-account-create-update-lp7db: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 27 19:20:36 crc kubenswrapper[4981]: E0227 19:20:36.959306 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/223e876e-b201-4728-b355-6a2385e78766-kube-api-access-dnssf podName:223e876e-b201-4728-b355-6a2385e78766 nodeName:}" failed. No retries permitted until 2026-02-27 19:20:37.459291349 +0000 UTC m=+2136.938072509 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dnssf" (UniqueName: "kubernetes.io/projected/223e876e-b201-4728-b355-6a2385e78766-kube-api-access-dnssf") pod "keystone-2b17-account-create-update-lp7db" (UID: "223e876e-b201-4728-b355-6a2385e78766") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.968312 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-crjbt"] Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.978201 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 19:20:36 crc kubenswrapper[4981]: I0227 19:20:36.988507 4981 scope.go:117] "RemoveContainer" containerID="29a09efd35a13f9913a2db4c13af471be771dc66870af5193ce438f581026f26" Feb 27 19:20:37 crc kubenswrapper[4981]: E0227 19:20:37.001404 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29a09efd35a13f9913a2db4c13af471be771dc66870af5193ce438f581026f26\": container with ID starting with 29a09efd35a13f9913a2db4c13af471be771dc66870af5193ce438f581026f26 not found: ID does not exist" containerID="29a09efd35a13f9913a2db4c13af471be771dc66870af5193ce438f581026f26" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.001570 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29a09efd35a13f9913a2db4c13af471be771dc66870af5193ce438f581026f26"} err="failed to get container status \"29a09efd35a13f9913a2db4c13af471be771dc66870af5193ce438f581026f26\": rpc error: code = NotFound desc = could not find container \"29a09efd35a13f9913a2db4c13af471be771dc66870af5193ce438f581026f26\": container with ID starting with 29a09efd35a13f9913a2db4c13af471be771dc66870af5193ce438f581026f26 not found: ID does not exist" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.001708 4981 scope.go:117] "RemoveContainer" 
containerID="a80ed4738b33c46d258ade1f4c61824d5ccb8d85f0a684c76db99ee197e25f02" Feb 27 19:20:37 crc kubenswrapper[4981]: E0227 19:20:37.004079 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a80ed4738b33c46d258ade1f4c61824d5ccb8d85f0a684c76db99ee197e25f02\": container with ID starting with a80ed4738b33c46d258ade1f4c61824d5ccb8d85f0a684c76db99ee197e25f02 not found: ID does not exist" containerID="a80ed4738b33c46d258ade1f4c61824d5ccb8d85f0a684c76db99ee197e25f02" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.004155 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a80ed4738b33c46d258ade1f4c61824d5ccb8d85f0a684c76db99ee197e25f02"} err="failed to get container status \"a80ed4738b33c46d258ade1f4c61824d5ccb8d85f0a684c76db99ee197e25f02\": rpc error: code = NotFound desc = could not find container \"a80ed4738b33c46d258ade1f4c61824d5ccb8d85f0a684c76db99ee197e25f02\": container with ID starting with a80ed4738b33c46d258ade1f4c61824d5ccb8d85f0a684c76db99ee197e25f02 not found: ID does not exist" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.010931 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.023727 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.045177 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e72a-account-create-update-fdvkk" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.058770 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9966-account-create-update-v784d" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.072267 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-a620-account-create-update-dxhm7" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.084672 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-d9wv4" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.096022 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-2b17-account-create-update-lp7db"] Feb 27 19:20:37 crc kubenswrapper[4981]: E0227 19:20:37.096963 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-dnssf operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-2b17-account-create-update-lp7db" podUID="223e876e-b201-4728-b355-6a2385e78766" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.105808 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-badb-account-create-update-m5cpb" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.105901 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-421d-account-create-update-bbmzr" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.123744 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fjzpf"] Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.160153 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bf5661d-549c-4591-8f93-02bc09f63f29-operator-scripts\") pod \"5bf5661d-549c-4591-8f93-02bc09f63f29\" (UID: \"5bf5661d-549c-4591-8f93-02bc09f63f29\") " Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.160344 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71b893f8-fc1b-4dba-b63a-3c759969ae3c-operator-scripts\") pod \"71b893f8-fc1b-4dba-b63a-3c759969ae3c\" (UID: \"71b893f8-fc1b-4dba-b63a-3c759969ae3c\") " Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.160461 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hflmp\" (UniqueName: \"kubernetes.io/projected/5bf5661d-549c-4591-8f93-02bc09f63f29-kube-api-access-hflmp\") pod \"5bf5661d-549c-4591-8f93-02bc09f63f29\" (UID: \"5bf5661d-549c-4591-8f93-02bc09f63f29\") " Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.160497 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bblrm\" (UniqueName: \"kubernetes.io/projected/71b893f8-fc1b-4dba-b63a-3c759969ae3c-kube-api-access-bblrm\") pod \"71b893f8-fc1b-4dba-b63a-3c759969ae3c\" (UID: \"71b893f8-fc1b-4dba-b63a-3c759969ae3c\") " Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.162540 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71b893f8-fc1b-4dba-b63a-3c759969ae3c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"71b893f8-fc1b-4dba-b63a-3c759969ae3c" (UID: "71b893f8-fc1b-4dba-b63a-3c759969ae3c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.162620 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bf5661d-549c-4591-8f93-02bc09f63f29-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5bf5661d-549c-4591-8f93-02bc09f63f29" (UID: "5bf5661d-549c-4591-8f93-02bc09f63f29"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.168233 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bf5661d-549c-4591-8f93-02bc09f63f29-kube-api-access-hflmp" (OuterVolumeSpecName: "kube-api-access-hflmp") pod "5bf5661d-549c-4591-8f93-02bc09f63f29" (UID: "5bf5661d-549c-4591-8f93-02bc09f63f29"). InnerVolumeSpecName "kube-api-access-hflmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.168763 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71b893f8-fc1b-4dba-b63a-3c759969ae3c-kube-api-access-bblrm" (OuterVolumeSpecName: "kube-api-access-bblrm") pod "71b893f8-fc1b-4dba-b63a-3c759969ae3c" (UID: "71b893f8-fc1b-4dba-b63a-3c759969ae3c"). InnerVolumeSpecName "kube-api-access-bblrm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.194892 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-bbcc-account-create-update-rc6qw" event={"ID":"efe6b6e0-5d7c-4207-b5fc-44f510e301b7","Type":"ContainerDied","Data":"0de0b4f9327bbe8578b7a90d2791fd2f62141cedb1902d04503d4cf3cb137b14"} Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.195028 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-bbcc-account-create-update-rc6qw" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.203371 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-badb-account-create-update-m5cpb" event={"ID":"a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21","Type":"ContainerDied","Data":"eedd2fd6ac96be298b42eb34c961a146433a0c2d06e56d11d2b43c735eba36ed"} Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.203455 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-badb-account-create-update-m5cpb" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.207524 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e72a-account-create-update-fdvkk" event={"ID":"5bf5661d-549c-4591-8f93-02bc09f63f29","Type":"ContainerDied","Data":"12f3d04b78833a40a4f389a9efc10b665ca7c7b73ba1b44448bd3791c369fe74"} Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.207653 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e72a-account-create-update-fdvkk" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.212698 4981 generic.go:334] "Generic (PLEG): container finished" podID="0d200585-c61d-43f8-a17e-54f695df7dbe" containerID="fae968112d7a204ca91d2a8361567a64886536860a977043c0f6d6e84eeb765b" exitCode=0 Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.212722 4981 generic.go:334] "Generic (PLEG): container finished" podID="0d200585-c61d-43f8-a17e-54f695df7dbe" containerID="557acdada7a6927a2b4039b69f2529a1cfea5b22f511fe9433b3df0d998e6ebe" exitCode=2 Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.212760 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d200585-c61d-43f8-a17e-54f695df7dbe","Type":"ContainerDied","Data":"fae968112d7a204ca91d2a8361567a64886536860a977043c0f6d6e84eeb765b"} Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.212815 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d200585-c61d-43f8-a17e-54f695df7dbe","Type":"ContainerDied","Data":"557acdada7a6927a2b4039b69f2529a1cfea5b22f511fe9433b3df0d998e6ebe"} Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.214336 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d9wv4" event={"ID":"75dabc26-0258-4ef3-b0c8-04231f2fa5c5","Type":"ContainerDied","Data":"8fb5b3df0d7ace588fd7f6ddfffcfde8e231cf6d15bae18ff32d15a421435dfb"} Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.214444 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-d9wv4" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.224310 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-421d-account-create-update-bbmzr" event={"ID":"99264d6c-f9e9-4b89-882f-f9024381b3e4","Type":"ContainerDied","Data":"47fbf0a45be1fc9f328fed1ae7c5edb5409dc1b3b4f37473d899d3ba2f9c656c"} Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.224318 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-421d-account-create-update-bbmzr" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.228560 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-a620-account-create-update-dxhm7" event={"ID":"5eb78cff-4f39-4b26-8cf4-c1c8f64730ae","Type":"ContainerDied","Data":"13e2783306ace3a3c2a1f2f5f8811e608a377b0d6a47979cdab790950dca05a2"} Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.228644 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-a620-account-create-update-dxhm7" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.233366 4981 generic.go:334] "Generic (PLEG): container finished" podID="89937a2b-e16c-4964-a540-5a2f8fe812b7" containerID="6b8b8b2fc415f51817d8b6a2764a8f9e401f55ae49e1275fbc6feb278754299e" exitCode=2 Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.233443 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"89937a2b-e16c-4964-a540-5a2f8fe812b7","Type":"ContainerDied","Data":"6b8b8b2fc415f51817d8b6a2764a8f9e401f55ae49e1275fbc6feb278754299e"} Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.236443 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9966-account-create-update-v784d" event={"ID":"71b893f8-fc1b-4dba-b63a-3c759969ae3c","Type":"ContainerDied","Data":"adbcaccdfb392c0b2a8f91afa24d19374a7bc8c9453e5f4c3207f1aeeac0dea3"} Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.236466 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2b17-account-create-update-lp7db" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.236483 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9966-account-create-update-v784d" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.262125 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsldz\" (UniqueName: \"kubernetes.io/projected/99264d6c-f9e9-4b89-882f-f9024381b3e4-kube-api-access-qsldz\") pod \"99264d6c-f9e9-4b89-882f-f9024381b3e4\" (UID: \"99264d6c-f9e9-4b89-882f-f9024381b3e4\") " Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.262214 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj48j\" (UniqueName: \"kubernetes.io/projected/5eb78cff-4f39-4b26-8cf4-c1c8f64730ae-kube-api-access-nj48j\") pod \"5eb78cff-4f39-4b26-8cf4-c1c8f64730ae\" (UID: \"5eb78cff-4f39-4b26-8cf4-c1c8f64730ae\") " Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.262441 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75dabc26-0258-4ef3-b0c8-04231f2fa5c5-operator-scripts\") pod \"75dabc26-0258-4ef3-b0c8-04231f2fa5c5\" (UID: \"75dabc26-0258-4ef3-b0c8-04231f2fa5c5\") " Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.262482 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21-operator-scripts\") pod \"a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21\" (UID: \"a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21\") " Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.262529 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99264d6c-f9e9-4b89-882f-f9024381b3e4-operator-scripts\") pod \"99264d6c-f9e9-4b89-882f-f9024381b3e4\" (UID: \"99264d6c-f9e9-4b89-882f-f9024381b3e4\") " Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.262603 4981 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-w4r2w\" (UniqueName: \"kubernetes.io/projected/a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21-kube-api-access-w4r2w\") pod \"a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21\" (UID: \"a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21\") " Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.262898 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgqw5\" (UniqueName: \"kubernetes.io/projected/75dabc26-0258-4ef3-b0c8-04231f2fa5c5-kube-api-access-lgqw5\") pod \"75dabc26-0258-4ef3-b0c8-04231f2fa5c5\" (UID: \"75dabc26-0258-4ef3-b0c8-04231f2fa5c5\") " Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.262962 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5eb78cff-4f39-4b26-8cf4-c1c8f64730ae-operator-scripts\") pod \"5eb78cff-4f39-4b26-8cf4-c1c8f64730ae\" (UID: \"5eb78cff-4f39-4b26-8cf4-c1c8f64730ae\") " Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.262961 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75dabc26-0258-4ef3-b0c8-04231f2fa5c5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "75dabc26-0258-4ef3-b0c8-04231f2fa5c5" (UID: "75dabc26-0258-4ef3-b0c8-04231f2fa5c5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.263396 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bf5661d-549c-4591-8f93-02bc09f63f29-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.263416 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71b893f8-fc1b-4dba-b63a-3c759969ae3c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.263429 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hflmp\" (UniqueName: \"kubernetes.io/projected/5bf5661d-549c-4591-8f93-02bc09f63f29-kube-api-access-hflmp\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.263443 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bblrm\" (UniqueName: \"kubernetes.io/projected/71b893f8-fc1b-4dba-b63a-3c759969ae3c-kube-api-access-bblrm\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.263454 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75dabc26-0258-4ef3-b0c8-04231f2fa5c5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.263795 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21" (UID: "a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.265227 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99264d6c-f9e9-4b89-882f-f9024381b3e4-kube-api-access-qsldz" (OuterVolumeSpecName: "kube-api-access-qsldz") pod "99264d6c-f9e9-4b89-882f-f9024381b3e4" (UID: "99264d6c-f9e9-4b89-882f-f9024381b3e4"). InnerVolumeSpecName "kube-api-access-qsldz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.266630 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21-kube-api-access-w4r2w" (OuterVolumeSpecName: "kube-api-access-w4r2w") pod "a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21" (UID: "a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21"). InnerVolumeSpecName "kube-api-access-w4r2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.267838 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="918ffa1d-14dc-4215-ad79-e545616bcfc5" containerName="galera" containerID="cri-o://2383d387853c55d3b03208088009493504f3c3e88fd1ec79f4ffae6e55db5669" gracePeriod=30 Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.269277 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75dabc26-0258-4ef3-b0c8-04231f2fa5c5-kube-api-access-lgqw5" (OuterVolumeSpecName: "kube-api-access-lgqw5") pod "75dabc26-0258-4ef3-b0c8-04231f2fa5c5" (UID: "75dabc26-0258-4ef3-b0c8-04231f2fa5c5"). InnerVolumeSpecName "kube-api-access-lgqw5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.272389 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eb78cff-4f39-4b26-8cf4-c1c8f64730ae-kube-api-access-nj48j" (OuterVolumeSpecName: "kube-api-access-nj48j") pod "5eb78cff-4f39-4b26-8cf4-c1c8f64730ae" (UID: "5eb78cff-4f39-4b26-8cf4-c1c8f64730ae"). InnerVolumeSpecName "kube-api-access-nj48j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.309577 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5eb78cff-4f39-4b26-8cf4-c1c8f64730ae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5eb78cff-4f39-4b26-8cf4-c1c8f64730ae" (UID: "5eb78cff-4f39-4b26-8cf4-c1c8f64730ae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.309611 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99264d6c-f9e9-4b89-882f-f9024381b3e4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "99264d6c-f9e9-4b89-882f-f9024381b3e4" (UID: "99264d6c-f9e9-4b89-882f-f9024381b3e4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.336122 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="48fdca2c-4513-4ee6-ad1b-bf69891f5580" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.176:8776/healthcheck\": dial tcp 10.217.0.176:8776: connect: connection refused" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.353298 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2b17-account-create-update-lp7db" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.371416 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgqw5\" (UniqueName: \"kubernetes.io/projected/75dabc26-0258-4ef3-b0c8-04231f2fa5c5-kube-api-access-lgqw5\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.371448 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5eb78cff-4f39-4b26-8cf4-c1c8f64730ae-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.371459 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsldz\" (UniqueName: \"kubernetes.io/projected/99264d6c-f9e9-4b89-882f-f9024381b3e4-kube-api-access-qsldz\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.371469 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj48j\" (UniqueName: \"kubernetes.io/projected/5eb78cff-4f39-4b26-8cf4-c1c8f64730ae-kube-api-access-nj48j\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.371479 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.371490 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99264d6c-f9e9-4b89-882f-f9024381b3e4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.371499 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4r2w\" (UniqueName: \"kubernetes.io/projected/a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21-kube-api-access-w4r2w\") on 
node \"crc\" DevicePath \"\"" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.379434 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e72a-account-create-update-fdvkk"] Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.411227 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e72a-account-create-update-fdvkk"] Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.423781 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="0c7f2b23-f800-4970-b530-aac7387e0936" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.224:8775/\": read tcp 10.217.0.2:33774->10.217.0.224:8775: read: connection reset by peer" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.423850 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="0c7f2b23-f800-4970-b530-aac7387e0936" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.224:8775/\": read tcp 10.217.0.2:33788->10.217.0.224:8775: read: connection reset by peer" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.450300 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-9966-account-create-update-v784d"] Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.472739 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnssf\" (UniqueName: \"kubernetes.io/projected/223e876e-b201-4728-b355-6a2385e78766-kube-api-access-dnssf\") pod \"keystone-2b17-account-create-update-lp7db\" (UID: \"223e876e-b201-4728-b355-6a2385e78766\") " pod="openstack/keystone-2b17-account-create-update-lp7db" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.472849 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/223e876e-b201-4728-b355-6a2385e78766-operator-scripts\") pod \"keystone-2b17-account-create-update-lp7db\" (UID: \"223e876e-b201-4728-b355-6a2385e78766\") " pod="openstack/keystone-2b17-account-create-update-lp7db" Feb 27 19:20:37 crc kubenswrapper[4981]: E0227 19:20:37.472988 4981 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 27 19:20:37 crc kubenswrapper[4981]: E0227 19:20:37.473032 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/223e876e-b201-4728-b355-6a2385e78766-operator-scripts podName:223e876e-b201-4728-b355-6a2385e78766 nodeName:}" failed. No retries permitted until 2026-02-27 19:20:38.47301766 +0000 UTC m=+2137.951798820 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/223e876e-b201-4728-b355-6a2385e78766-operator-scripts") pod "keystone-2b17-account-create-update-lp7db" (UID: "223e876e-b201-4728-b355-6a2385e78766") : configmap "openstack-scripts" not found Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.475170 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-9966-account-create-update-v784d"] Feb 27 19:20:37 crc kubenswrapper[4981]: E0227 19:20:37.483849 4981 projected.go:194] Error preparing data for projected volume kube-api-access-dnssf for pod openstack/keystone-2b17-account-create-update-lp7db: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 27 19:20:37 crc kubenswrapper[4981]: E0227 19:20:37.483917 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/223e876e-b201-4728-b355-6a2385e78766-kube-api-access-dnssf podName:223e876e-b201-4728-b355-6a2385e78766 nodeName:}" failed. No retries permitted until 2026-02-27 19:20:38.483895974 +0000 UTC m=+2137.962677134 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dnssf" (UniqueName: "kubernetes.io/projected/223e876e-b201-4728-b355-6a2385e78766-kube-api-access-dnssf") pod "keystone-2b17-account-create-update-lp7db" (UID: "223e876e-b201-4728-b355-6a2385e78766") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.493316 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-76f488968b-rp6r2" podUID="a912cdfa-b0ce-4ed4-909d-9d1af2a5a879" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.170:9311/healthcheck\": read tcp 10.217.0.2:43834->10.217.0.170:9311: read: connection reset by peer" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.493421 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-76f488968b-rp6r2" podUID="a912cdfa-b0ce-4ed4-909d-9d1af2a5a879" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.170:9311/healthcheck\": read tcp 10.217.0.2:43818->10.217.0.170:9311: read: connection reset by peer" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.493503 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-bbcc-account-create-update-rc6qw"] Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.497631 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-bbcc-account-create-update-rc6qw"] Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.557768 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fjzpf"] Feb 27 19:20:37 crc kubenswrapper[4981]: E0227 19:20:37.565671 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:20:37 crc kubenswrapper[4981]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Feb 27 19:20:37 crc 
kubenswrapper[4981]: Feb 27 19:20:37 crc kubenswrapper[4981]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 27 19:20:37 crc kubenswrapper[4981]: Feb 27 19:20:37 crc kubenswrapper[4981]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 27 19:20:37 crc kubenswrapper[4981]: Feb 27 19:20:37 crc kubenswrapper[4981]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 27 19:20:37 crc kubenswrapper[4981]: Feb 27 19:20:37 crc kubenswrapper[4981]: if [ -n "" ]; then Feb 27 19:20:37 crc kubenswrapper[4981]: GRANT_DATABASE="" Feb 27 19:20:37 crc kubenswrapper[4981]: else Feb 27 19:20:37 crc kubenswrapper[4981]: GRANT_DATABASE="*" Feb 27 19:20:37 crc kubenswrapper[4981]: fi Feb 27 19:20:37 crc kubenswrapper[4981]: Feb 27 19:20:37 crc kubenswrapper[4981]: # going for maximum compatibility here: Feb 27 19:20:37 crc kubenswrapper[4981]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 27 19:20:37 crc kubenswrapper[4981]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 27 19:20:37 crc kubenswrapper[4981]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 27 19:20:37 crc kubenswrapper[4981]: # support updates Feb 27 19:20:37 crc kubenswrapper[4981]: Feb 27 19:20:37 crc kubenswrapper[4981]: $MYSQL_CMD < logger="UnhandledError" Feb 27 19:20:37 crc kubenswrapper[4981]: E0227 19:20:37.568221 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-fjzpf" podUID="81568895-7de5-48b6-a4c0-6601ca0aa244" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.638656 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c22e070-8348-440e-a801-64927da21e98" path="/var/lib/kubelet/pods/1c22e070-8348-440e-a801-64927da21e98/volumes" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.639270 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="392b1bc3-d461-4cc5-8d63-64922c6c3d04" path="/var/lib/kubelet/pods/392b1bc3-d461-4cc5-8d63-64922c6c3d04/volumes" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.639824 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e1537c5-44bf-4b8f-8ea4-07bf58baf21f" path="/var/lib/kubelet/pods/3e1537c5-44bf-4b8f-8ea4-07bf58baf21f/volumes" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.640781 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e27e8aa-f220-4415-8670-ca9186161dba" path="/var/lib/kubelet/pods/4e27e8aa-f220-4415-8670-ca9186161dba/volumes" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.641283 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5640be3b-ba9b-4530-8bf8-595f0428c3ee" path="/var/lib/kubelet/pods/5640be3b-ba9b-4530-8bf8-595f0428c3ee/volumes" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.641740 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="5bf5661d-549c-4591-8f93-02bc09f63f29" path="/var/lib/kubelet/pods/5bf5661d-549c-4591-8f93-02bc09f63f29/volumes" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.642579 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71b893f8-fc1b-4dba-b63a-3c759969ae3c" path="/var/lib/kubelet/pods/71b893f8-fc1b-4dba-b63a-3c759969ae3c/volumes" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.642956 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7ebc81e-dae3-428f-9401-ddead1a42cec" path="/var/lib/kubelet/pods/c7ebc81e-dae3-428f-9401-ddead1a42cec/volumes" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.643530 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e48390e6-5fc4-4c7e-983d-8338bf663e75" path="/var/lib/kubelet/pods/e48390e6-5fc4-4c7e-983d-8338bf663e75/volumes" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.644023 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e719b057-15c7-4204-9cbc-665f6653011f" path="/var/lib/kubelet/pods/e719b057-15c7-4204-9cbc-665f6653011f/volumes" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.645156 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efe6b6e0-5d7c-4207-b5fc-44f510e301b7" path="/var/lib/kubelet/pods/efe6b6e0-5d7c-4207-b5fc-44f510e301b7/volumes" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.693025 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.739372 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="c1bafd9d-a283-406e-900b-3c5d1aae55fe" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.191:9292/healthcheck\": dial tcp 10.217.0.191:9292: connect: connection refused" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.739439 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="c1bafd9d-a283-406e-900b-3c5d1aae55fe" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.191:9292/healthcheck\": dial tcp 10.217.0.191:9292: connect: connection refused" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.778030 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89937a2b-e16c-4964-a540-5a2f8fe812b7-combined-ca-bundle\") pod \"89937a2b-e16c-4964-a540-5a2f8fe812b7\" (UID: \"89937a2b-e16c-4964-a540-5a2f8fe812b7\") " Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.778935 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4mz7\" (UniqueName: \"kubernetes.io/projected/89937a2b-e16c-4964-a540-5a2f8fe812b7-kube-api-access-l4mz7\") pod \"89937a2b-e16c-4964-a540-5a2f8fe812b7\" (UID: \"89937a2b-e16c-4964-a540-5a2f8fe812b7\") " Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.779235 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/89937a2b-e16c-4964-a540-5a2f8fe812b7-kube-state-metrics-tls-config\") pod \"89937a2b-e16c-4964-a540-5a2f8fe812b7\" (UID: \"89937a2b-e16c-4964-a540-5a2f8fe812b7\") " Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.779306 4981 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/89937a2b-e16c-4964-a540-5a2f8fe812b7-kube-state-metrics-tls-certs\") pod \"89937a2b-e16c-4964-a540-5a2f8fe812b7\" (UID: \"89937a2b-e16c-4964-a540-5a2f8fe812b7\") " Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.786575 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89937a2b-e16c-4964-a540-5a2f8fe812b7-kube-api-access-l4mz7" (OuterVolumeSpecName: "kube-api-access-l4mz7") pod "89937a2b-e16c-4964-a540-5a2f8fe812b7" (UID: "89937a2b-e16c-4964-a540-5a2f8fe812b7"). InnerVolumeSpecName "kube-api-access-l4mz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.829960 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89937a2b-e16c-4964-a540-5a2f8fe812b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89937a2b-e16c-4964-a540-5a2f8fe812b7" (UID: "89937a2b-e16c-4964-a540-5a2f8fe812b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.834391 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89937a2b-e16c-4964-a540-5a2f8fe812b7-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "89937a2b-e16c-4964-a540-5a2f8fe812b7" (UID: "89937a2b-e16c-4964-a540-5a2f8fe812b7"). InnerVolumeSpecName "kube-state-metrics-tls-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.841767 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-421d-account-create-update-bbmzr"] Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.867674 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-421d-account-create-update-bbmzr"] Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.874132 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89937a2b-e16c-4964-a540-5a2f8fe812b7-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "89937a2b-e16c-4964-a540-5a2f8fe812b7" (UID: "89937a2b-e16c-4964-a540-5a2f8fe812b7"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.881917 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4mz7\" (UniqueName: \"kubernetes.io/projected/89937a2b-e16c-4964-a540-5a2f8fe812b7-kube-api-access-l4mz7\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.881969 4981 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/89937a2b-e16c-4964-a540-5a2f8fe812b7-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.881985 4981 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/89937a2b-e16c-4964-a540-5a2f8fe812b7-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.881998 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89937a2b-e16c-4964-a540-5a2f8fe812b7-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.896024 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-a620-account-create-update-dxhm7"] Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.913443 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-a620-account-create-update-dxhm7"] Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.926076 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-badb-account-create-update-m5cpb"] Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.933184 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-badb-account-create-update-m5cpb"] Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.948105 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-d9wv4"] Feb 27 19:20:37 crc kubenswrapper[4981]: I0227 19:20:37.954247 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-d9wv4"] Feb 27 19:20:38 crc kubenswrapper[4981]: E0227 19:20:38.147527 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17 is running failed: container process not found" containerID="2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 27 19:20:38 crc kubenswrapper[4981]: E0227 19:20:38.152292 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17 is running failed: container process not found" containerID="2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17" 
cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 27 19:20:38 crc kubenswrapper[4981]: E0227 19:20:38.152292 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ee6fb96a4b332552632dd9dc8737353181f1aa6fa1493b16b542b6e241c02773" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 27 19:20:38 crc kubenswrapper[4981]: E0227 19:20:38.152922 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17 is running failed: container process not found" containerID="2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 27 19:20:38 crc kubenswrapper[4981]: E0227 19:20:38.152965 4981 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-5xwl7" podUID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" containerName="ovsdb-server" Feb 27 19:20:38 crc kubenswrapper[4981]: E0227 19:20:38.162395 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ee6fb96a4b332552632dd9dc8737353181f1aa6fa1493b16b542b6e241c02773" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 27 19:20:38 crc kubenswrapper[4981]: E0227 19:20:38.167949 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="ee6fb96a4b332552632dd9dc8737353181f1aa6fa1493b16b542b6e241c02773" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 27 19:20:38 crc kubenswrapper[4981]: E0227 19:20:38.168014 4981 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-5xwl7" podUID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" containerName="ovs-vswitchd" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.276791 4981 generic.go:334] "Generic (PLEG): container finished" podID="48fdca2c-4513-4ee6-ad1b-bf69891f5580" containerID="3a24b6b7046d00eaee078203e6b423a21700f864b03fcfa22beb510090d24c3b" exitCode=0 Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.276866 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"48fdca2c-4513-4ee6-ad1b-bf69891f5580","Type":"ContainerDied","Data":"3a24b6b7046d00eaee078203e6b423a21700f864b03fcfa22beb510090d24c3b"} Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.282998 4981 generic.go:334] "Generic (PLEG): container finished" podID="a912cdfa-b0ce-4ed4-909d-9d1af2a5a879" containerID="7a49980a63c473dcc320eddd1d497f4ed0bc0d50087c74c6dd29e6e12d599a2d" exitCode=0 Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.283131 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76f488968b-rp6r2" event={"ID":"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879","Type":"ContainerDied","Data":"7a49980a63c473dcc320eddd1d497f4ed0bc0d50087c74c6dd29e6e12d599a2d"} Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.285667 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"89937a2b-e16c-4964-a540-5a2f8fe812b7","Type":"ContainerDied","Data":"9e003c62fdbf646c5dc7afe3de60d0dd0ac04afdf3bde31e15f44ec669421042"} 
Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.285716 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.285724 4981 scope.go:117] "RemoveContainer" containerID="6b8b8b2fc415f51817d8b6a2764a8f9e401f55ae49e1275fbc6feb278754299e" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.295605 4981 generic.go:334] "Generic (PLEG): container finished" podID="0aa05f73-e7d2-440b-ab1f-780f23c26272" containerID="3795787eaffa6416278d5f620720e96bed3cebe839428ad34ee4a6b1bbcfb5ed" exitCode=0 Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.295685 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0aa05f73-e7d2-440b-ab1f-780f23c26272","Type":"ContainerDied","Data":"3795787eaffa6416278d5f620720e96bed3cebe839428ad34ee4a6b1bbcfb5ed"} Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.297508 4981 generic.go:334] "Generic (PLEG): container finished" podID="e4ec5ec3-4a83-4c2a-adde-600a759fcec2" containerID="9c670261714a51e0bc1dd408854bbb5ff1ed9f5b62828c8c8ece1900fa737f24" exitCode=0 Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.297525 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e4ec5ec3-4a83-4c2a-adde-600a759fcec2","Type":"ContainerDied","Data":"9c670261714a51e0bc1dd408854bbb5ff1ed9f5b62828c8c8ece1900fa737f24"} Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.298694 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fjzpf" event={"ID":"81568895-7de5-48b6-a4c0-6601ca0aa244","Type":"ContainerStarted","Data":"bb4fc6f3f4c4d66cf1dc0bb1a21dde33bd74efd8c647d126e4c87d182af002ba"} Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.302474 4981 generic.go:334] "Generic (PLEG): container finished" podID="6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2" 
containerID="dae852a53f7febec558df780ac57acd7d91cce2fba1b3b86d956c36653347faa" exitCode=0 Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.302509 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f8d597b78-f58nv" event={"ID":"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2","Type":"ContainerDied","Data":"dae852a53f7febec558df780ac57acd7d91cce2fba1b3b86d956c36653347faa"} Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.304107 4981 generic.go:334] "Generic (PLEG): container finished" podID="b63f8c5e-ff68-4a07-a2a5-5c3290e21669" containerID="2831b1fbd633eee20cda167168b129b60e56a04ba92a023a388d553dde52965e" exitCode=0 Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.304163 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b63f8c5e-ff68-4a07-a2a5-5c3290e21669","Type":"ContainerDied","Data":"2831b1fbd633eee20cda167168b129b60e56a04ba92a023a388d553dde52965e"} Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.310860 4981 generic.go:334] "Generic (PLEG): container finished" podID="c1bafd9d-a283-406e-900b-3c5d1aae55fe" containerID="9252062e4f9078c805d000ada9f14dcf8cf9e94119dfd3eee20804d3a99f7de4" exitCode=0 Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.310946 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c1bafd9d-a283-406e-900b-3c5d1aae55fe","Type":"ContainerDied","Data":"9252062e4f9078c805d000ada9f14dcf8cf9e94119dfd3eee20804d3a99f7de4"} Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.318161 4981 generic.go:334] "Generic (PLEG): container finished" podID="faa3914e-426b-4791-8199-a7630729baf0" containerID="1ab68f63ecb4d970b493500d1f84ddfa479978e09bb4f5454405ac3cff3972ba" exitCode=0 Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.318225 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"faa3914e-426b-4791-8199-a7630729baf0","Type":"ContainerDied","Data":"1ab68f63ecb4d970b493500d1f84ddfa479978e09bb4f5454405ac3cff3972ba"} Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.320406 4981 generic.go:334] "Generic (PLEG): container finished" podID="0c7f2b23-f800-4970-b530-aac7387e0936" containerID="c360cdd61e3163e8b02b644a2169bebd548e95d9e6f4be8bc924e36168d7c4bd" exitCode=0 Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.320456 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c7f2b23-f800-4970-b530-aac7387e0936","Type":"ContainerDied","Data":"c360cdd61e3163e8b02b644a2169bebd548e95d9e6f4be8bc924e36168d7c4bd"} Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.323740 4981 generic.go:334] "Generic (PLEG): container finished" podID="0d200585-c61d-43f8-a17e-54f695df7dbe" containerID="bda927ae23d6de9d50708df6de11982ad0fda24fdecf895c9e04685dc88ac49b" exitCode=0 Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.323825 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d200585-c61d-43f8-a17e-54f695df7dbe","Type":"ContainerDied","Data":"bda927ae23d6de9d50708df6de11982ad0fda24fdecf895c9e04685dc88ac49b"} Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.327271 4981 generic.go:334] "Generic (PLEG): container finished" podID="d83a972b-9d9d-407c-a714-821900bc148e" containerID="3152c46cc349f7837726327bf3254bd69f1fb1bbbe347a8afb428e8a80072528" exitCode=0 Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.327361 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2b17-account-create-update-lp7db" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.327988 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d83a972b-9d9d-407c-a714-821900bc148e","Type":"ContainerDied","Data":"3152c46cc349f7837726327bf3254bd69f1fb1bbbe347a8afb428e8a80072528"} Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.339099 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f928877c-eaff-4ab4-ae3b-ba6ed721642c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.495674 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnssf\" (UniqueName: \"kubernetes.io/projected/223e876e-b201-4728-b355-6a2385e78766-kube-api-access-dnssf\") pod \"keystone-2b17-account-create-update-lp7db\" (UID: \"223e876e-b201-4728-b355-6a2385e78766\") " pod="openstack/keystone-2b17-account-create-update-lp7db" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.496199 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/223e876e-b201-4728-b355-6a2385e78766-operator-scripts\") pod \"keystone-2b17-account-create-update-lp7db\" (UID: \"223e876e-b201-4728-b355-6a2385e78766\") " pod="openstack/keystone-2b17-account-create-update-lp7db" Feb 27 19:20:38 crc kubenswrapper[4981]: E0227 19:20:38.496439 4981 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 27 19:20:38 crc kubenswrapper[4981]: E0227 19:20:38.496542 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/223e876e-b201-4728-b355-6a2385e78766-operator-scripts podName:223e876e-b201-4728-b355-6a2385e78766 nodeName:}" failed. 
No retries permitted until 2026-02-27 19:20:40.496524697 +0000 UTC m=+2139.975305857 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/223e876e-b201-4728-b355-6a2385e78766-operator-scripts") pod "keystone-2b17-account-create-update-lp7db" (UID: "223e876e-b201-4728-b355-6a2385e78766") : configmap "openstack-scripts" not found Feb 27 19:20:38 crc kubenswrapper[4981]: E0227 19:20:38.499406 4981 projected.go:194] Error preparing data for projected volume kube-api-access-dnssf for pod openstack/keystone-2b17-account-create-update-lp7db: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 27 19:20:38 crc kubenswrapper[4981]: E0227 19:20:38.499540 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/223e876e-b201-4728-b355-6a2385e78766-kube-api-access-dnssf podName:223e876e-b201-4728-b355-6a2385e78766 nodeName:}" failed. No retries permitted until 2026-02-27 19:20:40.499485849 +0000 UTC m=+2139.978267009 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dnssf" (UniqueName: "kubernetes.io/projected/223e876e-b201-4728-b355-6a2385e78766-kube-api-access-dnssf") pod "keystone-2b17-account-create-update-lp7db" (UID: "223e876e-b201-4728-b355-6a2385e78766") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.511207 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.518479 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.543693 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-2b17-account-create-update-lp7db"] Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.552006 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-2b17-account-create-update-lp7db"] Feb 27 19:20:38 crc kubenswrapper[4981]: E0227 19:20:38.565355 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c670261714a51e0bc1dd408854bbb5ff1ed9f5b62828c8c8ece1900fa737f24 is running failed: container process not found" containerID="9c670261714a51e0bc1dd408854bbb5ff1ed9f5b62828c8c8ece1900fa737f24" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 19:20:38 crc kubenswrapper[4981]: E0227 19:20:38.566003 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c670261714a51e0bc1dd408854bbb5ff1ed9f5b62828c8c8ece1900fa737f24 is running failed: container process not found" containerID="9c670261714a51e0bc1dd408854bbb5ff1ed9f5b62828c8c8ece1900fa737f24" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 19:20:38 crc kubenswrapper[4981]: E0227 19:20:38.566383 4981 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c670261714a51e0bc1dd408854bbb5ff1ed9f5b62828c8c8ece1900fa737f24 is running failed: container process not found" containerID="9c670261714a51e0bc1dd408854bbb5ff1ed9f5b62828c8c8ece1900fa737f24" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 19:20:38 crc kubenswrapper[4981]: E0227 19:20:38.566414 4981 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c670261714a51e0bc1dd408854bbb5ff1ed9f5b62828c8c8ece1900fa737f24 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="e4ec5ec3-4a83-4c2a-adde-600a759fcec2" containerName="nova-cell0-conductor-conductor" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.604343 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.699185 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-config-data\") pod \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.699384 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pvsv\" (UniqueName: \"kubernetes.io/projected/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-kube-api-access-7pvsv\") pod \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.699452 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-public-tls-certs\") pod \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.699484 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-internal-tls-certs\") pod \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.699565 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-scripts\") pod \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.699609 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-logs\") pod \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.699859 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-combined-ca-bundle\") pod \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\" (UID: \"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.700560 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnssf\" (UniqueName: \"kubernetes.io/projected/223e876e-b201-4728-b355-6a2385e78766-kube-api-access-dnssf\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.700964 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/223e876e-b201-4728-b355-6a2385e78766-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.706946 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-scripts" (OuterVolumeSpecName: "scripts") pod "6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2" (UID: "6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.708560 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-logs" (OuterVolumeSpecName: "logs") pod "6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2" (UID: "6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.712773 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-kube-api-access-7pvsv" (OuterVolumeSpecName: "kube-api-access-7pvsv") pod "6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2" (UID: "6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2"). InnerVolumeSpecName "kube-api-access-7pvsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.763213 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2" (UID: "6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.767581 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-config-data" (OuterVolumeSpecName: "config-data") pod "6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2" (UID: "6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.771140 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76f488968b-rp6r2" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.793232 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.807483 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-logs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.807509 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.807520 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.807531 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pvsv\" (UniqueName: \"kubernetes.io/projected/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-kube-api-access-7pvsv\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.807539 4981 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.816797 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.857259 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fjzpf" Feb 27 19:20:38 crc kubenswrapper[4981]: E0227 19:20:38.872696 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3152c46cc349f7837726327bf3254bd69f1fb1bbbe347a8afb428e8a80072528 is running failed: container process not found" containerID="3152c46cc349f7837726327bf3254bd69f1fb1bbbe347a8afb428e8a80072528" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.882269 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2" (UID: "6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:38 crc kubenswrapper[4981]: E0227 19:20:38.883080 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3152c46cc349f7837726327bf3254bd69f1fb1bbbe347a8afb428e8a80072528 is running failed: container process not found" containerID="3152c46cc349f7837726327bf3254bd69f1fb1bbbe347a8afb428e8a80072528" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 19:20:38 crc kubenswrapper[4981]: E0227 19:20:38.886512 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3152c46cc349f7837726327bf3254bd69f1fb1bbbe347a8afb428e8a80072528 is running failed: container process not found" containerID="3152c46cc349f7837726327bf3254bd69f1fb1bbbe347a8afb428e8a80072528" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 19:20:38 crc kubenswrapper[4981]: E0227 19:20:38.886556 4981 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3152c46cc349f7837726327bf3254bd69f1fb1bbbe347a8afb428e8a80072528 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d83a972b-9d9d-407c-a714-821900bc148e" containerName="nova-scheduler-scheduler" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.909266 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-config-data\") pod \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.909320 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0aa05f73-e7d2-440b-ab1f-780f23c26272-config-data\") pod \"0aa05f73-e7d2-440b-ab1f-780f23c26272\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.909396 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9sk9\" (UniqueName: \"kubernetes.io/projected/0aa05f73-e7d2-440b-ab1f-780f23c26272-kube-api-access-z9sk9\") pod \"0aa05f73-e7d2-440b-ab1f-780f23c26272\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.909426 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-combined-ca-bundle\") pod \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.909460 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmj9n\" (UniqueName: \"kubernetes.io/projected/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-kube-api-access-gmj9n\") pod \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.909485 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-config-data-custom\") pod \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.909515 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-config-data\") pod \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " Feb 27 19:20:38 crc 
kubenswrapper[4981]: I0227 19:20:38.909538 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa05f73-e7d2-440b-ab1f-780f23c26272-combined-ca-bundle\") pod \"0aa05f73-e7d2-440b-ab1f-780f23c26272\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.909569 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-combined-ca-bundle\") pod \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.909599 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"0aa05f73-e7d2-440b-ab1f-780f23c26272\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.909626 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48fdca2c-4513-4ee6-ad1b-bf69891f5580-logs\") pod \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.909683 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-internal-tls-certs\") pod \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.909716 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0aa05f73-e7d2-440b-ab1f-780f23c26272-internal-tls-certs\") pod 
\"0aa05f73-e7d2-440b-ab1f-780f23c26272\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.909762 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-logs\") pod \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.909783 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8wpz\" (UniqueName: \"kubernetes.io/projected/48fdca2c-4513-4ee6-ad1b-bf69891f5580-kube-api-access-r8wpz\") pod \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.909808 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-public-tls-certs\") pod \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.909869 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-scripts\") pod \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.909900 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aa05f73-e7d2-440b-ab1f-780f23c26272-logs\") pod \"0aa05f73-e7d2-440b-ab1f-780f23c26272\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.909942 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0aa05f73-e7d2-440b-ab1f-780f23c26272-scripts\") pod \"0aa05f73-e7d2-440b-ab1f-780f23c26272\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.909970 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0aa05f73-e7d2-440b-ab1f-780f23c26272-httpd-run\") pod \"0aa05f73-e7d2-440b-ab1f-780f23c26272\" (UID: \"0aa05f73-e7d2-440b-ab1f-780f23c26272\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.909997 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/48fdca2c-4513-4ee6-ad1b-bf69891f5580-etc-machine-id\") pod \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.910023 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-public-tls-certs\") pod \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.910074 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-internal-tls-certs\") pod \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\" (UID: \"48fdca2c-4513-4ee6-ad1b-bf69891f5580\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.910119 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-config-data-custom\") pod \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\" (UID: \"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879\") " Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.910636 
4981 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.912295 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48fdca2c-4513-4ee6-ad1b-bf69891f5580-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "48fdca2c-4513-4ee6-ad1b-bf69891f5580" (UID: "48fdca2c-4513-4ee6-ad1b-bf69891f5580"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.915876 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0aa05f73-e7d2-440b-ab1f-780f23c26272-logs" (OuterVolumeSpecName: "logs") pod "0aa05f73-e7d2-440b-ab1f-780f23c26272" (UID: "0aa05f73-e7d2-440b-ab1f-780f23c26272"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.916754 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0aa05f73-e7d2-440b-ab1f-780f23c26272-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0aa05f73-e7d2-440b-ab1f-780f23c26272" (UID: "0aa05f73-e7d2-440b-ab1f-780f23c26272"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.917196 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a912cdfa-b0ce-4ed4-909d-9d1af2a5a879" (UID: "a912cdfa-b0ce-4ed4-909d-9d1af2a5a879"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.917943 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-logs" (OuterVolumeSpecName: "logs") pod "a912cdfa-b0ce-4ed4-909d-9d1af2a5a879" (UID: "a912cdfa-b0ce-4ed4-909d-9d1af2a5a879"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.919160 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48fdca2c-4513-4ee6-ad1b-bf69891f5580-logs" (OuterVolumeSpecName: "logs") pod "48fdca2c-4513-4ee6-ad1b-bf69891f5580" (UID: "48fdca2c-4513-4ee6-ad1b-bf69891f5580"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.919570 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aa05f73-e7d2-440b-ab1f-780f23c26272-scripts" (OuterVolumeSpecName: "scripts") pod "0aa05f73-e7d2-440b-ab1f-780f23c26272" (UID: "0aa05f73-e7d2-440b-ab1f-780f23c26272"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.922234 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "0aa05f73-e7d2-440b-ab1f-780f23c26272" (UID: "0aa05f73-e7d2-440b-ab1f-780f23c26272"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.930470 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-kube-api-access-gmj9n" (OuterVolumeSpecName: "kube-api-access-gmj9n") pod "a912cdfa-b0ce-4ed4-909d-9d1af2a5a879" (UID: "a912cdfa-b0ce-4ed4-909d-9d1af2a5a879"). InnerVolumeSpecName "kube-api-access-gmj9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.930548 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aa05f73-e7d2-440b-ab1f-780f23c26272-kube-api-access-z9sk9" (OuterVolumeSpecName: "kube-api-access-z9sk9") pod "0aa05f73-e7d2-440b-ab1f-780f23c26272" (UID: "0aa05f73-e7d2-440b-ab1f-780f23c26272"). InnerVolumeSpecName "kube-api-access-z9sk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.930697 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48fdca2c-4513-4ee6-ad1b-bf69891f5580-kube-api-access-r8wpz" (OuterVolumeSpecName: "kube-api-access-r8wpz") pod "48fdca2c-4513-4ee6-ad1b-bf69891f5580" (UID: "48fdca2c-4513-4ee6-ad1b-bf69891f5580"). InnerVolumeSpecName "kube-api-access-r8wpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.932217 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.935338 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-scripts" (OuterVolumeSpecName: "scripts") pod "48fdca2c-4513-4ee6-ad1b-bf69891f5580" (UID: "48fdca2c-4513-4ee6-ad1b-bf69891f5580"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.936381 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2" (UID: "6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.952576 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "48fdca2c-4513-4ee6-ad1b-bf69891f5580" (UID: "48fdca2c-4513-4ee6-ad1b-bf69891f5580"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:38 crc kubenswrapper[4981]: I0227 19:20:38.978403 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48fdca2c-4513-4ee6-ad1b-bf69891f5580" (UID: "48fdca2c-4513-4ee6-ad1b-bf69891f5580"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.013679 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4ec5ec3-4a83-4c2a-adde-600a759fcec2-config-data\") pod \"e4ec5ec3-4a83-4c2a-adde-600a759fcec2\" (UID: \"e4ec5ec3-4a83-4c2a-adde-600a759fcec2\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.013815 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prtc7\" (UniqueName: \"kubernetes.io/projected/e4ec5ec3-4a83-4c2a-adde-600a759fcec2-kube-api-access-prtc7\") pod \"e4ec5ec3-4a83-4c2a-adde-600a759fcec2\" (UID: \"e4ec5ec3-4a83-4c2a-adde-600a759fcec2\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.013880 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81568895-7de5-48b6-a4c0-6601ca0aa244-operator-scripts\") pod \"81568895-7de5-48b6-a4c0-6601ca0aa244\" (UID: \"81568895-7de5-48b6-a4c0-6601ca0aa244\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.013904 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ec5ec3-4a83-4c2a-adde-600a759fcec2-combined-ca-bundle\") pod \"e4ec5ec3-4a83-4c2a-adde-600a759fcec2\" (UID: \"e4ec5ec3-4a83-4c2a-adde-600a759fcec2\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.014071 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsfr9\" (UniqueName: \"kubernetes.io/projected/81568895-7de5-48b6-a4c0-6601ca0aa244-kube-api-access-lsfr9\") pod \"81568895-7de5-48b6-a4c0-6601ca0aa244\" (UID: \"81568895-7de5-48b6-a4c0-6601ca0aa244\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.014683 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.014707 4981 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.014719 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aa05f73-e7d2-440b-ab1f-780f23c26272-logs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.014727 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aa05f73-e7d2-440b-ab1f-780f23c26272-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.014737 4981 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0aa05f73-e7d2-440b-ab1f-780f23c26272-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.014745 4981 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/48fdca2c-4513-4ee6-ad1b-bf69891f5580-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.014755 4981 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.014763 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9sk9\" (UniqueName: \"kubernetes.io/projected/0aa05f73-e7d2-440b-ab1f-780f23c26272-kube-api-access-z9sk9\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: 
I0227 19:20:39.014772 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.014784 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmj9n\" (UniqueName: \"kubernetes.io/projected/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-kube-api-access-gmj9n\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.014797 4981 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.014821 4981 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.014830 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48fdca2c-4513-4ee6-ad1b-bf69891f5580-logs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.014839 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-logs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.014847 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8wpz\" (UniqueName: \"kubernetes.io/projected/48fdca2c-4513-4ee6-ad1b-bf69891f5580-kube-api-access-r8wpz\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.016617 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/81568895-7de5-48b6-a4c0-6601ca0aa244-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81568895-7de5-48b6-a4c0-6601ca0aa244" (UID: "81568895-7de5-48b6-a4c0-6601ca0aa244"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.037221 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aa05f73-e7d2-440b-ab1f-780f23c26272-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0aa05f73-e7d2-440b-ab1f-780f23c26272" (UID: "0aa05f73-e7d2-440b-ab1f-780f23c26272"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.037453 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "48fdca2c-4513-4ee6-ad1b-bf69891f5580" (UID: "48fdca2c-4513-4ee6-ad1b-bf69891f5580"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.040203 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aa05f73-e7d2-440b-ab1f-780f23c26272-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0aa05f73-e7d2-440b-ab1f-780f23c26272" (UID: "0aa05f73-e7d2-440b-ab1f-780f23c26272"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.065812 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-config-data" (OuterVolumeSpecName: "config-data") pod "a912cdfa-b0ce-4ed4-909d-9d1af2a5a879" (UID: "a912cdfa-b0ce-4ed4-909d-9d1af2a5a879"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.068597 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81568895-7de5-48b6-a4c0-6601ca0aa244-kube-api-access-lsfr9" (OuterVolumeSpecName: "kube-api-access-lsfr9") pod "81568895-7de5-48b6-a4c0-6601ca0aa244" (UID: "81568895-7de5-48b6-a4c0-6601ca0aa244"). InnerVolumeSpecName "kube-api-access-lsfr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.071442 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4ec5ec3-4a83-4c2a-adde-600a759fcec2-kube-api-access-prtc7" (OuterVolumeSpecName: "kube-api-access-prtc7") pod "e4ec5ec3-4a83-4c2a-adde-600a759fcec2" (UID: "e4ec5ec3-4a83-4c2a-adde-600a759fcec2"). InnerVolumeSpecName "kube-api-access-prtc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.088030 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "48fdca2c-4513-4ee6-ad1b-bf69891f5580" (UID: "48fdca2c-4513-4ee6-ad1b-bf69891f5580"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.088342 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a912cdfa-b0ce-4ed4-909d-9d1af2a5a879" (UID: "a912cdfa-b0ce-4ed4-909d-9d1af2a5a879"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.090615 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a912cdfa-b0ce-4ed4-909d-9d1af2a5a879" (UID: "a912cdfa-b0ce-4ed4-909d-9d1af2a5a879"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.098122 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-config-data" (OuterVolumeSpecName: "config-data") pod "48fdca2c-4513-4ee6-ad1b-bf69891f5580" (UID: "48fdca2c-4513-4ee6-ad1b-bf69891f5580"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.102499 4981 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.107891 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a912cdfa-b0ce-4ed4-909d-9d1af2a5a879" (UID: "a912cdfa-b0ce-4ed4-909d-9d1af2a5a879"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.108018 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aa05f73-e7d2-440b-ab1f-780f23c26272-config-data" (OuterVolumeSpecName: "config-data") pod "0aa05f73-e7d2-440b-ab1f-780f23c26272" (UID: "0aa05f73-e7d2-440b-ab1f-780f23c26272"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.110185 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ec5ec3-4a83-4c2a-adde-600a759fcec2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4ec5ec3-4a83-4c2a-adde-600a759fcec2" (UID: "e4ec5ec3-4a83-4c2a-adde-600a759fcec2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.117642 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.117925 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa05f73-e7d2-440b-ab1f-780f23c26272-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.118030 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.118114 4981 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.118170 4981 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.118222 4981 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0aa05f73-e7d2-440b-ab1f-780f23c26272-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.118305 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prtc7\" (UniqueName: \"kubernetes.io/projected/e4ec5ec3-4a83-4c2a-adde-600a759fcec2-kube-api-access-prtc7\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.118376 4981 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.118432 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81568895-7de5-48b6-a4c0-6601ca0aa244-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.118520 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ec5ec3-4a83-4c2a-adde-600a759fcec2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.118577 4981 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.118645 4981 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48fdca2c-4513-4ee6-ad1b-bf69891f5580-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.118722 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsfr9\" (UniqueName: 
\"kubernetes.io/projected/81568895-7de5-48b6-a4c0-6601ca0aa244-kube-api-access-lsfr9\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.118885 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.118947 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aa05f73-e7d2-440b-ab1f-780f23c26272-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.119866 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ec5ec3-4a83-4c2a-adde-600a759fcec2-config-data" (OuterVolumeSpecName: "config-data") pod "e4ec5ec3-4a83-4c2a-adde-600a759fcec2" (UID: "e4ec5ec3-4a83-4c2a-adde-600a759fcec2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: E0227 19:20:39.159970 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a00132f0ac6dbee951194bcad710a6371433227c3b0775c31e258a5544d129d7" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 19:20:39 crc kubenswrapper[4981]: E0227 19:20:39.161671 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a00132f0ac6dbee951194bcad710a6371433227c3b0775c31e258a5544d129d7" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 19:20:39 crc kubenswrapper[4981]: E0227 19:20:39.162894 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a00132f0ac6dbee951194bcad710a6371433227c3b0775c31e258a5544d129d7" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 19:20:39 crc kubenswrapper[4981]: E0227 19:20:39.162925 4981 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="caff730d-9210-4de9-b0f1-997e6f5f16c3" containerName="nova-cell1-conductor-conductor" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.228334 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4ec5ec3-4a83-4c2a-adde-600a759fcec2-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.243393 4981 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/rabbitmq-server-0" podUID="991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.307083 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.322928 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.324075 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.348857 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.356252 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.356324 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"faa3914e-426b-4791-8199-a7630729baf0","Type":"ContainerDied","Data":"76e7c251baa64099ef1d3e55c24774ded93b6d65f1163a380cd5bba177c13696"} Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.356382 4981 scope.go:117] "RemoveContainer" containerID="1ab68f63ecb4d970b493500d1f84ddfa479978e09bb4f5454405ac3cff3972ba" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.374364 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"48fdca2c-4513-4ee6-ad1b-bf69891f5580","Type":"ContainerDied","Data":"3456dc5752607b5ff4b87c9e8223dcd60d2e5e49dfb93728027024c08bf6c2af"} Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.374475 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.393667 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0aa05f73-e7d2-440b-ab1f-780f23c26272","Type":"ContainerDied","Data":"e2cf4cb69297e4610ad8e0dc0631d68b5d64796315f3d0d1dae971f44173c3b0"} Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.393763 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.410635 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e4ec5ec3-4a83-4c2a-adde-600a759fcec2","Type":"ContainerDied","Data":"ead35c69a11ab5026d86278c5d0d4f931ed000b394d657d0b91589213630b7ef"} Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.410939 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.418518 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fjzpf" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.419866 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fjzpf" event={"ID":"81568895-7de5-48b6-a4c0-6601ca0aa244","Type":"ContainerDied","Data":"bb4fc6f3f4c4d66cf1dc0bb1a21dde33bd74efd8c647d126e4c87d182af002ba"} Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.432070 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/faa3914e-426b-4791-8199-a7630729baf0-public-tls-certs\") pod \"faa3914e-426b-4791-8199-a7630729baf0\" (UID: \"faa3914e-426b-4791-8199-a7630729baf0\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.432126 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c7f2b23-f800-4970-b530-aac7387e0936-config-data\") pod \"0c7f2b23-f800-4970-b530-aac7387e0936\" (UID: \"0c7f2b23-f800-4970-b530-aac7387e0936\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.432235 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqgjt\" (UniqueName: \"kubernetes.io/projected/faa3914e-426b-4791-8199-a7630729baf0-kube-api-access-sqgjt\") pod \"faa3914e-426b-4791-8199-a7630729baf0\" (UID: \"faa3914e-426b-4791-8199-a7630729baf0\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.432279 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83a972b-9d9d-407c-a714-821900bc148e-combined-ca-bundle\") pod \"d83a972b-9d9d-407c-a714-821900bc148e\" (UID: \"d83a972b-9d9d-407c-a714-821900bc148e\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.432325 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-combined-ca-bundle\") pod \"b63f8c5e-ff68-4a07-a2a5-5c3290e21669\" (UID: \"b63f8c5e-ff68-4a07-a2a5-5c3290e21669\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.432353 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5gt2\" (UniqueName: \"kubernetes.io/projected/0c7f2b23-f800-4970-b530-aac7387e0936-kube-api-access-r5gt2\") pod \"0c7f2b23-f800-4970-b530-aac7387e0936\" (UID: \"0c7f2b23-f800-4970-b530-aac7387e0936\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.432383 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/faa3914e-426b-4791-8199-a7630729baf0-internal-tls-certs\") pod \"faa3914e-426b-4791-8199-a7630729baf0\" (UID: \"faa3914e-426b-4791-8199-a7630729baf0\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.432418 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wklsc\" (UniqueName: \"kubernetes.io/projected/d83a972b-9d9d-407c-a714-821900bc148e-kube-api-access-wklsc\") pod \"d83a972b-9d9d-407c-a714-821900bc148e\" (UID: \"d83a972b-9d9d-407c-a714-821900bc148e\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.432453 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c7f2b23-f800-4970-b530-aac7387e0936-logs\") pod \"0c7f2b23-f800-4970-b530-aac7387e0936\" (UID: \"0c7f2b23-f800-4970-b530-aac7387e0936\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.432481 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs55b\" (UniqueName: \"kubernetes.io/projected/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-kube-api-access-zs55b\") pod \"b63f8c5e-ff68-4a07-a2a5-5c3290e21669\" (UID: \"b63f8c5e-ff68-4a07-a2a5-5c3290e21669\") " Feb 27 19:20:39 crc 
kubenswrapper[4981]: I0227 19:20:39.432514 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-memcached-tls-certs\") pod \"b63f8c5e-ff68-4a07-a2a5-5c3290e21669\" (UID: \"b63f8c5e-ff68-4a07-a2a5-5c3290e21669\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.432536 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c7f2b23-f800-4970-b530-aac7387e0936-combined-ca-bundle\") pod \"0c7f2b23-f800-4970-b530-aac7387e0936\" (UID: \"0c7f2b23-f800-4970-b530-aac7387e0936\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.432556 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-config-data\") pod \"b63f8c5e-ff68-4a07-a2a5-5c3290e21669\" (UID: \"b63f8c5e-ff68-4a07-a2a5-5c3290e21669\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.432590 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa3914e-426b-4791-8199-a7630729baf0-config-data\") pod \"faa3914e-426b-4791-8199-a7630729baf0\" (UID: \"faa3914e-426b-4791-8199-a7630729baf0\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.432639 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c7f2b23-f800-4970-b530-aac7387e0936-nova-metadata-tls-certs\") pod \"0c7f2b23-f800-4970-b530-aac7387e0936\" (UID: \"0c7f2b23-f800-4970-b530-aac7387e0936\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.432669 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d83a972b-9d9d-407c-a714-821900bc148e-config-data\") pod \"d83a972b-9d9d-407c-a714-821900bc148e\" (UID: \"d83a972b-9d9d-407c-a714-821900bc148e\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.432691 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faa3914e-426b-4791-8199-a7630729baf0-logs\") pod \"faa3914e-426b-4791-8199-a7630729baf0\" (UID: \"faa3914e-426b-4791-8199-a7630729baf0\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.432728 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-kolla-config\") pod \"b63f8c5e-ff68-4a07-a2a5-5c3290e21669\" (UID: \"b63f8c5e-ff68-4a07-a2a5-5c3290e21669\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.432753 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa3914e-426b-4791-8199-a7630729baf0-combined-ca-bundle\") pod \"faa3914e-426b-4791-8199-a7630729baf0\" (UID: \"faa3914e-426b-4791-8199-a7630729baf0\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.434162 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d83a972b-9d9d-407c-a714-821900bc148e","Type":"ContainerDied","Data":"f34379611e7efadca5dc02f554802ae5f8ffe6db4e8a3e8ed4ecdedee9c0e2e1"} Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.434285 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.434477 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-config-data" (OuterVolumeSpecName: "config-data") pod "b63f8c5e-ff68-4a07-a2a5-5c3290e21669" (UID: "b63f8c5e-ff68-4a07-a2a5-5c3290e21669"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.438965 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-kube-api-access-zs55b" (OuterVolumeSpecName: "kube-api-access-zs55b") pod "b63f8c5e-ff68-4a07-a2a5-5c3290e21669" (UID: "b63f8c5e-ff68-4a07-a2a5-5c3290e21669"). InnerVolumeSpecName "kube-api-access-zs55b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.439677 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faa3914e-426b-4791-8199-a7630729baf0-logs" (OuterVolumeSpecName: "logs") pod "faa3914e-426b-4791-8199-a7630729baf0" (UID: "faa3914e-426b-4791-8199-a7630729baf0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.442353 4981 scope.go:117] "RemoveContainer" containerID="9b7e4dbef5e7da71bff472adbb75ceb0867028f6b271bd5767677e483d453167" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.442845 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c7f2b23-f800-4970-b530-aac7387e0936-logs" (OuterVolumeSpecName: "logs") pod "0c7f2b23-f800-4970-b530-aac7387e0936" (UID: "0c7f2b23-f800-4970-b530-aac7387e0936"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.443730 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "b63f8c5e-ff68-4a07-a2a5-5c3290e21669" (UID: "b63f8c5e-ff68-4a07-a2a5-5c3290e21669"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.464503 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faa3914e-426b-4791-8199-a7630729baf0-kube-api-access-sqgjt" (OuterVolumeSpecName: "kube-api-access-sqgjt") pod "faa3914e-426b-4791-8199-a7630729baf0" (UID: "faa3914e-426b-4791-8199-a7630729baf0"). InnerVolumeSpecName "kube-api-access-sqgjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.464800 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c7f2b23-f800-4970-b530-aac7387e0936-kube-api-access-r5gt2" (OuterVolumeSpecName: "kube-api-access-r5gt2") pod "0c7f2b23-f800-4970-b530-aac7387e0936" (UID: "0c7f2b23-f800-4970-b530-aac7387e0936"). InnerVolumeSpecName "kube-api-access-r5gt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.465599 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d83a972b-9d9d-407c-a714-821900bc148e-kube-api-access-wklsc" (OuterVolumeSpecName: "kube-api-access-wklsc") pod "d83a972b-9d9d-407c-a714-821900bc148e" (UID: "d83a972b-9d9d-407c-a714-821900bc148e"). InnerVolumeSpecName "kube-api-access-wklsc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.468142 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76f488968b-rp6r2" event={"ID":"a912cdfa-b0ce-4ed4-909d-9d1af2a5a879","Type":"ContainerDied","Data":"712dd143263e22e384e1bebc6b75baaf7b100e000b17d43340799d46723dda22"} Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.468239 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76f488968b-rp6r2" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.475035 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f8d597b78-f58nv" event={"ID":"6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2","Type":"ContainerDied","Data":"fcaf84ff7ea507da5a5af79d58eee42b4e9ec5f09fd405e17f59181df9902115"} Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.475156 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6f8d597b78-f58nv" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.481412 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.481448 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0c7f2b23-f800-4970-b530-aac7387e0936","Type":"ContainerDied","Data":"4cc9008b747e4ec01afb5068786242822aa0e10c218b91d78282d277f02eb97b"} Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.500207 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c7f2b23-f800-4970-b530-aac7387e0936-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c7f2b23-f800-4970-b530-aac7387e0936" (UID: "0c7f2b23-f800-4970-b530-aac7387e0936"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.501819 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b63f8c5e-ff68-4a07-a2a5-5c3290e21669","Type":"ContainerDied","Data":"fcc2b5357d07ea6b8f91a6ff8e503b5a858e7e33b9071aa0bd4d2c33954951a7"} Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.501976 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.516413 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d83a972b-9d9d-407c-a714-821900bc148e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d83a972b-9d9d-407c-a714-821900bc148e" (UID: "d83a972b-9d9d-407c-a714-821900bc148e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.517899 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa3914e-426b-4791-8199-a7630729baf0-config-data" (OuterVolumeSpecName: "config-data") pod "faa3914e-426b-4791-8199-a7630729baf0" (UID: "faa3914e-426b-4791-8199-a7630729baf0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.534782 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqgjt\" (UniqueName: \"kubernetes.io/projected/faa3914e-426b-4791-8199-a7630729baf0-kube-api-access-sqgjt\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.534811 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83a972b-9d9d-407c-a714-821900bc148e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.534821 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5gt2\" (UniqueName: \"kubernetes.io/projected/0c7f2b23-f800-4970-b530-aac7387e0936-kube-api-access-r5gt2\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.534830 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wklsc\" (UniqueName: \"kubernetes.io/projected/d83a972b-9d9d-407c-a714-821900bc148e-kube-api-access-wklsc\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.534839 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c7f2b23-f800-4970-b530-aac7387e0936-logs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.534851 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zs55b\" (UniqueName: \"kubernetes.io/projected/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-kube-api-access-zs55b\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.534861 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc 
kubenswrapper[4981]: I0227 19:20:39.534869 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c7f2b23-f800-4970-b530-aac7387e0936-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.534880 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faa3914e-426b-4791-8199-a7630729baf0-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.534888 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faa3914e-426b-4791-8199-a7630729baf0-logs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.534897 4981 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.554216 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa3914e-426b-4791-8199-a7630729baf0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "faa3914e-426b-4791-8199-a7630729baf0" (UID: "faa3914e-426b-4791-8199-a7630729baf0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.584975 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b63f8c5e-ff68-4a07-a2a5-5c3290e21669" (UID: "b63f8c5e-ff68-4a07-a2a5-5c3290e21669"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.585075 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d83a972b-9d9d-407c-a714-821900bc148e-config-data" (OuterVolumeSpecName: "config-data") pod "d83a972b-9d9d-407c-a714-821900bc148e" (UID: "d83a972b-9d9d-407c-a714-821900bc148e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.604715 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa3914e-426b-4791-8199-a7630729baf0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "faa3914e-426b-4791-8199-a7630729baf0" (UID: "faa3914e-426b-4791-8199-a7630729baf0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.610244 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "b63f8c5e-ff68-4a07-a2a5-5c3290e21669" (UID: "b63f8c5e-ff68-4a07-a2a5-5c3290e21669"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.615974 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c7f2b23-f800-4970-b530-aac7387e0936-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0c7f2b23-f800-4970-b530-aac7387e0936" (UID: "0c7f2b23-f800-4970-b530-aac7387e0936"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.620467 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa3914e-426b-4791-8199-a7630729baf0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "faa3914e-426b-4791-8199-a7630729baf0" (UID: "faa3914e-426b-4791-8199-a7630729baf0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.622762 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c7f2b23-f800-4970-b530-aac7387e0936-config-data" (OuterVolumeSpecName: "config-data") pod "0c7f2b23-f800-4970-b530-aac7387e0936" (UID: "0c7f2b23-f800-4970-b530-aac7387e0936"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.636456 4981 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.636583 4981 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c7f2b23-f800-4970-b530-aac7387e0936-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.636597 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83a972b-9d9d-407c-a714-821900bc148e-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.636635 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faa3914e-426b-4791-8199-a7630729baf0-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.636646 4981 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/faa3914e-426b-4791-8199-a7630729baf0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.636657 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c7f2b23-f800-4970-b530-aac7387e0936-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.636667 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63f8c5e-ff68-4a07-a2a5-5c3290e21669-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.636675 4981 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/faa3914e-426b-4791-8199-a7630729baf0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.645046 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="223e876e-b201-4728-b355-6a2385e78766" path="/var/lib/kubelet/pods/223e876e-b201-4728-b355-6a2385e78766/volumes" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.645564 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eb78cff-4f39-4b26-8cf4-c1c8f64730ae" path="/var/lib/kubelet/pods/5eb78cff-4f39-4b26-8cf4-c1c8f64730ae/volumes" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.646357 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75dabc26-0258-4ef3-b0c8-04231f2fa5c5" path="/var/lib/kubelet/pods/75dabc26-0258-4ef3-b0c8-04231f2fa5c5/volumes" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.646796 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="89937a2b-e16c-4964-a540-5a2f8fe812b7" path="/var/lib/kubelet/pods/89937a2b-e16c-4964-a540-5a2f8fe812b7/volumes" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.647593 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99264d6c-f9e9-4b89-882f-f9024381b3e4" path="/var/lib/kubelet/pods/99264d6c-f9e9-4b89-882f-f9024381b3e4/volumes" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.648489 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21" path="/var/lib/kubelet/pods/a36120e4-6e5f-4d9a-ab05-2dd30aa3ac21/volumes" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.702855 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.729311 4981 scope.go:117] "RemoveContainer" containerID="3a24b6b7046d00eaee078203e6b423a21700f864b03fcfa22beb510090d24c3b" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.770496 4981 scope.go:117] "RemoveContainer" containerID="1cbf1ce682a3eeca1567599c6ff529e2db2d20a4965a962ce5c563a3e4dd58f1" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.798585 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6f8d597b78-f58nv"] Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.815377 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6f8d597b78-f58nv"] Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.829364 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.839904 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " Feb 27 19:20:39 crc kubenswrapper[4981]: 
I0227 19:20:39.839969 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1bafd9d-a283-406e-900b-3c5d1aae55fe-logs\") pod \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.840104 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1bafd9d-a283-406e-900b-3c5d1aae55fe-scripts\") pod \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.840153 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1bafd9d-a283-406e-900b-3c5d1aae55fe-public-tls-certs\") pod \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.840174 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c1bafd9d-a283-406e-900b-3c5d1aae55fe-httpd-run\") pod \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.840236 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1bafd9d-a283-406e-900b-3c5d1aae55fe-combined-ca-bundle\") pod \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.840256 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1bafd9d-a283-406e-900b-3c5d1aae55fe-config-data\") pod \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\" (UID: 
\"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.840300 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-476qt\" (UniqueName: \"kubernetes.io/projected/c1bafd9d-a283-406e-900b-3c5d1aae55fe-kube-api-access-476qt\") pod \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\" (UID: \"c1bafd9d-a283-406e-900b-3c5d1aae55fe\") " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.840774 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1bafd9d-a283-406e-900b-3c5d1aae55fe-logs" (OuterVolumeSpecName: "logs") pod "c1bafd9d-a283-406e-900b-3c5d1aae55fe" (UID: "c1bafd9d-a283-406e-900b-3c5d1aae55fe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.844799 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "c1bafd9d-a283-406e-900b-3c5d1aae55fe" (UID: "c1bafd9d-a283-406e-900b-3c5d1aae55fe"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.844894 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.849437 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1bafd9d-a283-406e-900b-3c5d1aae55fe-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c1bafd9d-a283-406e-900b-3c5d1aae55fe" (UID: "c1bafd9d-a283-406e-900b-3c5d1aae55fe"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.852443 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1bafd9d-a283-406e-900b-3c5d1aae55fe-scripts" (OuterVolumeSpecName: "scripts") pod "c1bafd9d-a283-406e-900b-3c5d1aae55fe" (UID: "c1bafd9d-a283-406e-900b-3c5d1aae55fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.856866 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.861768 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1bafd9d-a283-406e-900b-3c5d1aae55fe-kube-api-access-476qt" (OuterVolumeSpecName: "kube-api-access-476qt") pod "c1bafd9d-a283-406e-900b-3c5d1aae55fe" (UID: "c1bafd9d-a283-406e-900b-3c5d1aae55fe"). InnerVolumeSpecName "kube-api-access-476qt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.871212 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.873977 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1bafd9d-a283-406e-900b-3c5d1aae55fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1bafd9d-a283-406e-900b-3c5d1aae55fe" (UID: "c1bafd9d-a283-406e-900b-3c5d1aae55fe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.901407 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fjzpf"] Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.903854 4981 scope.go:117] "RemoveContainer" containerID="3795787eaffa6416278d5f620720e96bed3cebe839428ad34ee4a6b1bbcfb5ed" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.914160 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-fjzpf"] Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.926071 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1bafd9d-a283-406e-900b-3c5d1aae55fe-config-data" (OuterVolumeSpecName: "config-data") pod "c1bafd9d-a283-406e-900b-3c5d1aae55fe" (UID: "c1bafd9d-a283-406e-900b-3c5d1aae55fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.942347 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1bafd9d-a283-406e-900b-3c5d1aae55fe-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.942382 4981 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c1bafd9d-a283-406e-900b-3c5d1aae55fe-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.942394 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1bafd9d-a283-406e-900b-3c5d1aae55fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.942406 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c1bafd9d-a283-406e-900b-3c5d1aae55fe-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.942415 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-476qt\" (UniqueName: \"kubernetes.io/projected/c1bafd9d-a283-406e-900b-3c5d1aae55fe-kube-api-access-476qt\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.942437 4981 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.942446 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1bafd9d-a283-406e-900b-3c5d1aae55fe-logs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.945945 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1bafd9d-a283-406e-900b-3c5d1aae55fe-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c1bafd9d-a283-406e-900b-3c5d1aae55fe" (UID: "c1bafd9d-a283-406e-900b-3c5d1aae55fe"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:39 crc kubenswrapper[4981]: E0227 19:20:39.946076 4981 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 27 19:20:39 crc kubenswrapper[4981]: E0227 19:20:39.946129 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-config-data podName:991e04a2-e14a-4987-a7d8-b7f5db5cb8e3 nodeName:}" failed. No retries permitted until 2026-02-27 19:20:47.946112188 +0000 UTC m=+2147.424893338 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-config-data") pod "rabbitmq-server-0" (UID: "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3") : configmap "rabbitmq-config-data" not found Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.946311 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.952248 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.954226 4981 scope.go:117] "RemoveContainer" containerID="2affc55a229a8b585a8ade5bf43b2c239ea9c89cb121110f24bf358bb120da2a" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.958921 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.967879 4981 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.971264 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.975329 4981 scope.go:117] "RemoveContainer" containerID="9c670261714a51e0bc1dd408854bbb5ff1ed9f5b62828c8c8ece1900fa737f24" Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.981336 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-76f488968b-rp6r2"] Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.988496 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-76f488968b-rp6r2"] Feb 27 19:20:39 crc kubenswrapper[4981]: I0227 19:20:39.995930 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 19:20:40 crc kubenswrapper[4981]: I0227 
19:20:40.001493 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 19:20:40 crc kubenswrapper[4981]: I0227 19:20:40.001540 4981 scope.go:117] "RemoveContainer" containerID="3152c46cc349f7837726327bf3254bd69f1fb1bbbe347a8afb428e8a80072528" Feb 27 19:20:40 crc kubenswrapper[4981]: I0227 19:20:40.007557 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 19:20:40 crc kubenswrapper[4981]: I0227 19:20:40.013629 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 19:20:40 crc kubenswrapper[4981]: I0227 19:20:40.020359 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Feb 27 19:20:40 crc kubenswrapper[4981]: I0227 19:20:40.022021 4981 scope.go:117] "RemoveContainer" containerID="7a49980a63c473dcc320eddd1d497f4ed0bc0d50087c74c6dd29e6e12d599a2d" Feb 27 19:20:40 crc kubenswrapper[4981]: I0227 19:20:40.025606 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Feb 27 19:20:40 crc kubenswrapper[4981]: I0227 19:20:40.043815 4981 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1bafd9d-a283-406e-900b-3c5d1aae55fe-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:40 crc kubenswrapper[4981]: I0227 19:20:40.043852 4981 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:40 crc kubenswrapper[4981]: I0227 19:20:40.052835 4981 scope.go:117] "RemoveContainer" containerID="2bfade5e02f0643fd1bf6ce8f381f0eeac25e9090ba2aefbc0b89a2a24773551" Feb 27 19:20:40 crc kubenswrapper[4981]: E0227 19:20:40.070683 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , 
exit code -1" containerID="2383d387853c55d3b03208088009493504f3c3e88fd1ec79f4ffae6e55db5669" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 27 19:20:40 crc kubenswrapper[4981]: E0227 19:20:40.072230 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2383d387853c55d3b03208088009493504f3c3e88fd1ec79f4ffae6e55db5669" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 27 19:20:40 crc kubenswrapper[4981]: E0227 19:20:40.073921 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2383d387853c55d3b03208088009493504f3c3e88fd1ec79f4ffae6e55db5669" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 27 19:20:40 crc kubenswrapper[4981]: E0227 19:20:40.073964 4981 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="918ffa1d-14dc-4215-ad79-e545616bcfc5" containerName="galera" Feb 27 19:20:40 crc kubenswrapper[4981]: I0227 19:20:40.074553 4981 scope.go:117] "RemoveContainer" containerID="dae852a53f7febec558df780ac57acd7d91cce2fba1b3b86d956c36653347faa" Feb 27 19:20:40 crc kubenswrapper[4981]: I0227 19:20:40.094093 4981 scope.go:117] "RemoveContainer" containerID="da47666533c186d6e31e8632cdd467e851243fc49eff7f7fcac48865f970ee5b" Feb 27 19:20:40 crc kubenswrapper[4981]: I0227 19:20:40.119462 4981 scope.go:117] "RemoveContainer" containerID="c360cdd61e3163e8b02b644a2169bebd548e95d9e6f4be8bc924e36168d7c4bd" Feb 27 19:20:40 crc kubenswrapper[4981]: I0227 19:20:40.140080 4981 scope.go:117] "RemoveContainer" 
containerID="b68e0bd12b334c4ff9b7ac5e9cc423457c71f31b3839b31ec2c6e093fac9743d" Feb 27 19:20:40 crc kubenswrapper[4981]: I0227 19:20:40.174484 4981 scope.go:117] "RemoveContainer" containerID="2831b1fbd633eee20cda167168b129b60e56a04ba92a023a388d553dde52965e" Feb 27 19:20:40 crc kubenswrapper[4981]: I0227 19:20:40.512457 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c1bafd9d-a283-406e-900b-3c5d1aae55fe","Type":"ContainerDied","Data":"5f4fc666f17be0900b72ee1b730d61306746ca2c93508d547a052c97f9e5ac28"} Feb 27 19:20:40 crc kubenswrapper[4981]: I0227 19:20:40.512479 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 19:20:40 crc kubenswrapper[4981]: I0227 19:20:40.512504 4981 scope.go:117] "RemoveContainer" containerID="9252062e4f9078c805d000ada9f14dcf8cf9e94119dfd3eee20804d3a99f7de4" Feb 27 19:20:40 crc kubenswrapper[4981]: I0227 19:20:40.542298 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/keystone-6b879f46f9-hf222" podUID="087da308-30ee-4a17-945a-844baf0cf4b4" containerName="keystone-api" probeResult="failure" output="Get \"https://10.217.0.158:5000/v3\": read tcp 10.217.0.2:46508->10.217.0.158:5000: read: connection reset by peer" Feb 27 19:20:40 crc kubenswrapper[4981]: I0227 19:20:40.542680 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="4e27e8aa-f220-4415-8670-ca9186161dba" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.0.217:6080/vnc_lite.html\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 19:20:40 crc kubenswrapper[4981]: I0227 19:20:40.550715 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 19:20:40 crc kubenswrapper[4981]: I0227 19:20:40.556167 4981 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 19:20:40 crc kubenswrapper[4981]: I0227 19:20:40.559392 4981 scope.go:117] "RemoveContainer" containerID="c56f4d215954c7c816813a5607ee1845d8bcd2e458593efdcc15c07e8b8dfdc9" Feb 27 19:20:40 crc kubenswrapper[4981]: E0227 19:20:40.653737 4981 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 27 19:20:40 crc kubenswrapper[4981]: E0227 19:20:40.653827 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-config-data podName:f928877c-eaff-4ab4-ae3b-ba6ed721642c nodeName:}" failed. No retries permitted until 2026-02-27 19:20:48.653806787 +0000 UTC m=+2148.132587947 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-config-data") pod "rabbitmq-cell1-server-0" (UID: "f928877c-eaff-4ab4-ae3b-ba6ed721642c") : configmap "rabbitmq-cell1-config-data" not found Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.184627 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.273837 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-credential-keys\") pod \"087da308-30ee-4a17-945a-844baf0cf4b4\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.273915 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-internal-tls-certs\") pod \"087da308-30ee-4a17-945a-844baf0cf4b4\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.273961 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-public-tls-certs\") pod \"087da308-30ee-4a17-945a-844baf0cf4b4\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.274009 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-fernet-keys\") pod \"087da308-30ee-4a17-945a-844baf0cf4b4\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.274070 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-combined-ca-bundle\") pod \"087da308-30ee-4a17-945a-844baf0cf4b4\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.274113 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-config-data\") pod \"087da308-30ee-4a17-945a-844baf0cf4b4\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.274154 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-scripts\") pod \"087da308-30ee-4a17-945a-844baf0cf4b4\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.274177 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ls6z\" (UniqueName: \"kubernetes.io/projected/087da308-30ee-4a17-945a-844baf0cf4b4-kube-api-access-4ls6z\") pod \"087da308-30ee-4a17-945a-844baf0cf4b4\" (UID: \"087da308-30ee-4a17-945a-844baf0cf4b4\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.279305 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/087da308-30ee-4a17-945a-844baf0cf4b4-kube-api-access-4ls6z" (OuterVolumeSpecName: "kube-api-access-4ls6z") pod "087da308-30ee-4a17-945a-844baf0cf4b4" (UID: "087da308-30ee-4a17-945a-844baf0cf4b4"). InnerVolumeSpecName "kube-api-access-4ls6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.279932 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "087da308-30ee-4a17-945a-844baf0cf4b4" (UID: "087da308-30ee-4a17-945a-844baf0cf4b4"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.280339 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-scripts" (OuterVolumeSpecName: "scripts") pod "087da308-30ee-4a17-945a-844baf0cf4b4" (UID: "087da308-30ee-4a17-945a-844baf0cf4b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.284301 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "087da308-30ee-4a17-945a-844baf0cf4b4" (UID: "087da308-30ee-4a17-945a-844baf0cf4b4"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.315932 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "087da308-30ee-4a17-945a-844baf0cf4b4" (UID: "087da308-30ee-4a17-945a-844baf0cf4b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.325012 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "087da308-30ee-4a17-945a-844baf0cf4b4" (UID: "087da308-30ee-4a17-945a-844baf0cf4b4"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.345489 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "087da308-30ee-4a17-945a-844baf0cf4b4" (UID: "087da308-30ee-4a17-945a-844baf0cf4b4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.352915 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-config-data" (OuterVolumeSpecName: "config-data") pod "087da308-30ee-4a17-945a-844baf0cf4b4" (UID: "087da308-30ee-4a17-945a-844baf0cf4b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.359462 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.378035 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.378089 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.378102 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ls6z\" (UniqueName: \"kubernetes.io/projected/087da308-30ee-4a17-945a-844baf0cf4b4-kube-api-access-4ls6z\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.378115 4981 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.378126 4981 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.378136 4981 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.378145 4981 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.378154 4981 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/087da308-30ee-4a17-945a-844baf0cf4b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.478742 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-server-conf\") pod \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.478806 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-rabbitmq-tls\") pod \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.478861 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-config-data\") pod \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.478883 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-pod-info\") pod \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.478909 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwswc\" (UniqueName: \"kubernetes.io/projected/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-kube-api-access-nwswc\") pod \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " Feb 27 19:20:41 crc 
kubenswrapper[4981]: I0227 19:20:41.478945 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-plugins-conf\") pod \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.478966 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-erlang-cookie-secret\") pod \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.479715 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" (UID: "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.479801 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.479853 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-rabbitmq-erlang-cookie\") pod \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.479885 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-rabbitmq-plugins\") pod \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.479914 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-rabbitmq-confd\") pod \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\" (UID: \"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.480331 4981 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.480865 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") 
pod "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" (UID: "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.480889 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" (UID: "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.482867 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-kube-api-access-nwswc" (OuterVolumeSpecName: "kube-api-access-nwswc") pod "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" (UID: "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3"). InnerVolumeSpecName "kube-api-access-nwswc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.483210 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" (UID: "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.483758 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" (UID: "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.486487 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-pod-info" (OuterVolumeSpecName: "pod-info") pod "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" (UID: "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.504495 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-config-data" (OuterVolumeSpecName: "config-data") pod "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" (UID: "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.506655 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" (UID: "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.525738 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-server-conf" (OuterVolumeSpecName: "server-conf") pod "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" (UID: "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.542520 4981 generic.go:334] "Generic (PLEG): container finished" podID="918ffa1d-14dc-4215-ad79-e545616bcfc5" containerID="2383d387853c55d3b03208088009493504f3c3e88fd1ec79f4ffae6e55db5669" exitCode=0 Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.542934 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"918ffa1d-14dc-4215-ad79-e545616bcfc5","Type":"ContainerDied","Data":"2383d387853c55d3b03208088009493504f3c3e88fd1ec79f4ffae6e55db5669"} Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.545463 4981 generic.go:334] "Generic (PLEG): container finished" podID="087da308-30ee-4a17-945a-844baf0cf4b4" containerID="59a5b965f6d87fa3d0946ea826d976074eb37a38f37b9809cebb0d08e9b762b3" exitCode=0 Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.545525 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6b879f46f9-hf222" event={"ID":"087da308-30ee-4a17-945a-844baf0cf4b4","Type":"ContainerDied","Data":"59a5b965f6d87fa3d0946ea826d976074eb37a38f37b9809cebb0d08e9b762b3"} Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.545552 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6b879f46f9-hf222" event={"ID":"087da308-30ee-4a17-945a-844baf0cf4b4","Type":"ContainerDied","Data":"04acb6c88d2caf2e0efc687ab30e047e3b37bdcd4d22af1b19197156d2276983"} Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.545568 4981 scope.go:117] "RemoveContainer" containerID="59a5b965f6d87fa3d0946ea826d976074eb37a38f37b9809cebb0d08e9b762b3" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.545684 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6b879f46f9-hf222" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.572438 4981 generic.go:334] "Generic (PLEG): container finished" podID="f928877c-eaff-4ab4-ae3b-ba6ed721642c" containerID="b17d1c158ee9a02d955c961d36f3778f1d0ce99cc8890e879aaabb3483dbe8a8" exitCode=0 Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.572641 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f928877c-eaff-4ab4-ae3b-ba6ed721642c","Type":"ContainerDied","Data":"b17d1c158ee9a02d955c961d36f3778f1d0ce99cc8890e879aaabb3483dbe8a8"} Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.572712 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f928877c-eaff-4ab4-ae3b-ba6ed721642c","Type":"ContainerDied","Data":"6795223a6b8b39373261959a131e16cb35f3b07b0bf0ad21ba4a93e00f66f0ab"} Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.572726 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6795223a6b8b39373261959a131e16cb35f3b07b0bf0ad21ba4a93e00f66f0ab" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.582283 4981 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.582325 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.582335 4981 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-pod-info\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 
19:20:41.582344 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwswc\" (UniqueName: \"kubernetes.io/projected/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-kube-api-access-nwswc\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.582352 4981 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.582380 4981 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.582390 4981 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.582399 4981 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.582407 4981 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-server-conf\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.602307 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" (UID: "991e04a2-e14a-4987-a7d8-b7f5db5cb8e3"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.604294 4981 generic.go:334] "Generic (PLEG): container finished" podID="991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" containerID="57f6f61d81b4bb62c7f39a3b1be260072a8b0b4fe66cc915fa2a92ab863c30e7" exitCode=0 Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.604498 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3","Type":"ContainerDied","Data":"57f6f61d81b4bb62c7f39a3b1be260072a8b0b4fe66cc915fa2a92ab863c30e7"} Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.604626 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"991e04a2-e14a-4987-a7d8-b7f5db5cb8e3","Type":"ContainerDied","Data":"8f1ddb97a65a5da0da1fd55dca803bf43ca0ea1e390331a93cd9482174f63c65"} Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.604802 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.625269 4981 generic.go:334] "Generic (PLEG): container finished" podID="e691b557-a141-44b1-a2c7-4ba36af55a15" containerID="05eafe4f692fe809c80310522b3ee1e9042aee13aa572f6f81438b46b0174a5c" exitCode=0 Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.625333 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b6bf89d9-5xrv6" event={"ID":"e691b557-a141-44b1-a2c7-4ba36af55a15","Type":"ContainerDied","Data":"05eafe4f692fe809c80310522b3ee1e9042aee13aa572f6f81438b46b0174a5c"} Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.647978 4981 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.665306 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aa05f73-e7d2-440b-ab1f-780f23c26272" path="/var/lib/kubelet/pods/0aa05f73-e7d2-440b-ab1f-780f23c26272/volumes" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.666313 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c7f2b23-f800-4970-b530-aac7387e0936" path="/var/lib/kubelet/pods/0c7f2b23-f800-4970-b530-aac7387e0936/volumes" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.666873 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48fdca2c-4513-4ee6-ad1b-bf69891f5580" path="/var/lib/kubelet/pods/48fdca2c-4513-4ee6-ad1b-bf69891f5580/volumes" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.669284 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2" path="/var/lib/kubelet/pods/6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2/volumes" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.673310 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="81568895-7de5-48b6-a4c0-6601ca0aa244" path="/var/lib/kubelet/pods/81568895-7de5-48b6-a4c0-6601ca0aa244/volumes" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.673676 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a912cdfa-b0ce-4ed4-909d-9d1af2a5a879" path="/var/lib/kubelet/pods/a912cdfa-b0ce-4ed4-909d-9d1af2a5a879/volumes" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.674319 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b63f8c5e-ff68-4a07-a2a5-5c3290e21669" path="/var/lib/kubelet/pods/b63f8c5e-ff68-4a07-a2a5-5c3290e21669/volumes" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.685220 4981 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.685249 4981 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.686471 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1bafd9d-a283-406e-900b-3c5d1aae55fe" path="/var/lib/kubelet/pods/c1bafd9d-a283-406e-900b-3c5d1aae55fe/volumes" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.689652 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d83a972b-9d9d-407c-a714-821900bc148e" path="/var/lib/kubelet/pods/d83a972b-9d9d-407c-a714-821900bc148e/volumes" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.690318 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4ec5ec3-4a83-4c2a-adde-600a759fcec2" path="/var/lib/kubelet/pods/e4ec5ec3-4a83-4c2a-adde-600a759fcec2/volumes" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.690928 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="faa3914e-426b-4791-8199-a7630729baf0" path="/var/lib/kubelet/pods/faa3914e-426b-4791-8199-a7630729baf0/volumes" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.696153 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.717402 4981 scope.go:117] "RemoveContainer" containerID="59a5b965f6d87fa3d0946ea826d976074eb37a38f37b9809cebb0d08e9b762b3" Feb 27 19:20:41 crc kubenswrapper[4981]: E0227 19:20:41.720941 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59a5b965f6d87fa3d0946ea826d976074eb37a38f37b9809cebb0d08e9b762b3\": container with ID starting with 59a5b965f6d87fa3d0946ea826d976074eb37a38f37b9809cebb0d08e9b762b3 not found: ID does not exist" containerID="59a5b965f6d87fa3d0946ea826d976074eb37a38f37b9809cebb0d08e9b762b3" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.720984 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59a5b965f6d87fa3d0946ea826d976074eb37a38f37b9809cebb0d08e9b762b3"} err="failed to get container status \"59a5b965f6d87fa3d0946ea826d976074eb37a38f37b9809cebb0d08e9b762b3\": rpc error: code = NotFound desc = could not find container \"59a5b965f6d87fa3d0946ea826d976074eb37a38f37b9809cebb0d08e9b762b3\": container with ID starting with 59a5b965f6d87fa3d0946ea826d976074eb37a38f37b9809cebb0d08e9b762b3 not found: ID does not exist" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.721010 4981 scope.go:117] "RemoveContainer" containerID="57f6f61d81b4bb62c7f39a3b1be260072a8b0b4fe66cc915fa2a92ab863c30e7" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.728189 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.744686 4981 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.761330 4981 scope.go:117] "RemoveContainer" containerID="739bf9a3ac6ea571ec9bca214592f39bd326faa6890041f869de8b117f30fc3d" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.786693 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-plugins-conf\") pod \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.786753 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f928877c-eaff-4ab4-ae3b-ba6ed721642c-rabbitmq-erlang-cookie\") pod \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.786809 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-server-conf\") pod \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.786864 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f928877c-eaff-4ab4-ae3b-ba6ed721642c-rabbitmq-confd\") pod \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.786913 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkpxm\" (UniqueName: \"kubernetes.io/projected/f928877c-eaff-4ab4-ae3b-ba6ed721642c-kube-api-access-hkpxm\") pod \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\" (UID: 
\"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.786942 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-config-data\") pod \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.786961 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f928877c-eaff-4ab4-ae3b-ba6ed721642c-rabbitmq-tls\") pod \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.786989 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.787005 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f928877c-eaff-4ab4-ae3b-ba6ed721642c-rabbitmq-plugins\") pod \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.787026 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f928877c-eaff-4ab4-ae3b-ba6ed721642c-pod-info\") pod \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.787041 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/f928877c-eaff-4ab4-ae3b-ba6ed721642c-erlang-cookie-secret\") pod \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\" (UID: \"f928877c-eaff-4ab4-ae3b-ba6ed721642c\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.790932 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f928877c-eaff-4ab4-ae3b-ba6ed721642c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f928877c-eaff-4ab4-ae3b-ba6ed721642c" (UID: "f928877c-eaff-4ab4-ae3b-ba6ed721642c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.792005 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f928877c-eaff-4ab4-ae3b-ba6ed721642c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f928877c-eaff-4ab4-ae3b-ba6ed721642c" (UID: "f928877c-eaff-4ab4-ae3b-ba6ed721642c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.794543 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "f928877c-eaff-4ab4-ae3b-ba6ed721642c" (UID: "f928877c-eaff-4ab4-ae3b-ba6ed721642c"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.794842 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f928877c-eaff-4ab4-ae3b-ba6ed721642c-kube-api-access-hkpxm" (OuterVolumeSpecName: "kube-api-access-hkpxm") pod "f928877c-eaff-4ab4-ae3b-ba6ed721642c" (UID: "f928877c-eaff-4ab4-ae3b-ba6ed721642c"). InnerVolumeSpecName "kube-api-access-hkpxm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.795142 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f928877c-eaff-4ab4-ae3b-ba6ed721642c" (UID: "f928877c-eaff-4ab4-ae3b-ba6ed721642c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.795685 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f928877c-eaff-4ab4-ae3b-ba6ed721642c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f928877c-eaff-4ab4-ae3b-ba6ed721642c" (UID: "f928877c-eaff-4ab4-ae3b-ba6ed721642c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.797731 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f928877c-eaff-4ab4-ae3b-ba6ed721642c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f928877c-eaff-4ab4-ae3b-ba6ed721642c" (UID: "f928877c-eaff-4ab4-ae3b-ba6ed721642c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.799389 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.816457 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f928877c-eaff-4ab4-ae3b-ba6ed721642c-pod-info" (OuterVolumeSpecName: "pod-info") pod "f928877c-eaff-4ab4-ae3b-ba6ed721642c" (UID: "f928877c-eaff-4ab4-ae3b-ba6ed721642c"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.819691 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-config-data" (OuterVolumeSpecName: "config-data") pod "f928877c-eaff-4ab4-ae3b-ba6ed721642c" (UID: "f928877c-eaff-4ab4-ae3b-ba6ed721642c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.820217 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6b879f46f9-hf222"] Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.828676 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6b879f46f9-hf222"] Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.856717 4981 scope.go:117] "RemoveContainer" containerID="57f6f61d81b4bb62c7f39a3b1be260072a8b0b4fe66cc915fa2a92ab863c30e7" Feb 27 19:20:41 crc kubenswrapper[4981]: E0227 19:20:41.857634 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57f6f61d81b4bb62c7f39a3b1be260072a8b0b4fe66cc915fa2a92ab863c30e7\": container with ID starting with 57f6f61d81b4bb62c7f39a3b1be260072a8b0b4fe66cc915fa2a92ab863c30e7 not found: ID does not exist" containerID="57f6f61d81b4bb62c7f39a3b1be260072a8b0b4fe66cc915fa2a92ab863c30e7" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.858189 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57f6f61d81b4bb62c7f39a3b1be260072a8b0b4fe66cc915fa2a92ab863c30e7"} err="failed to get container status \"57f6f61d81b4bb62c7f39a3b1be260072a8b0b4fe66cc915fa2a92ab863c30e7\": rpc error: code = NotFound desc = could not find container \"57f6f61d81b4bb62c7f39a3b1be260072a8b0b4fe66cc915fa2a92ab863c30e7\": container with ID starting with 
57f6f61d81b4bb62c7f39a3b1be260072a8b0b4fe66cc915fa2a92ab863c30e7 not found: ID does not exist" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.858234 4981 scope.go:117] "RemoveContainer" containerID="739bf9a3ac6ea571ec9bca214592f39bd326faa6890041f869de8b117f30fc3d" Feb 27 19:20:41 crc kubenswrapper[4981]: E0227 19:20:41.859846 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"739bf9a3ac6ea571ec9bca214592f39bd326faa6890041f869de8b117f30fc3d\": container with ID starting with 739bf9a3ac6ea571ec9bca214592f39bd326faa6890041f869de8b117f30fc3d not found: ID does not exist" containerID="739bf9a3ac6ea571ec9bca214592f39bd326faa6890041f869de8b117f30fc3d" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.859882 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"739bf9a3ac6ea571ec9bca214592f39bd326faa6890041f869de8b117f30fc3d"} err="failed to get container status \"739bf9a3ac6ea571ec9bca214592f39bd326faa6890041f869de8b117f30fc3d\": rpc error: code = NotFound desc = could not find container \"739bf9a3ac6ea571ec9bca214592f39bd326faa6890041f869de8b117f30fc3d\": container with ID starting with 739bf9a3ac6ea571ec9bca214592f39bd326faa6890041f869de8b117f30fc3d not found: ID does not exist" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.864720 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-server-conf" (OuterVolumeSpecName: "server-conf") pod "f928877c-eaff-4ab4-ae3b-ba6ed721642c" (UID: "f928877c-eaff-4ab4-ae3b-ba6ed721642c"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.888669 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/918ffa1d-14dc-4215-ad79-e545616bcfc5-galera-tls-certs\") pod \"918ffa1d-14dc-4215-ad79-e545616bcfc5\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.888737 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/918ffa1d-14dc-4215-ad79-e545616bcfc5-kolla-config\") pod \"918ffa1d-14dc-4215-ad79-e545616bcfc5\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.888807 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/918ffa1d-14dc-4215-ad79-e545616bcfc5-config-data-default\") pod \"918ffa1d-14dc-4215-ad79-e545616bcfc5\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.888833 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918ffa1d-14dc-4215-ad79-e545616bcfc5-combined-ca-bundle\") pod \"918ffa1d-14dc-4215-ad79-e545616bcfc5\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.888873 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"918ffa1d-14dc-4215-ad79-e545616bcfc5\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.888916 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/918ffa1d-14dc-4215-ad79-e545616bcfc5-config-data-generated\") pod \"918ffa1d-14dc-4215-ad79-e545616bcfc5\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.889001 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m94vw\" (UniqueName: \"kubernetes.io/projected/918ffa1d-14dc-4215-ad79-e545616bcfc5-kube-api-access-m94vw\") pod \"918ffa1d-14dc-4215-ad79-e545616bcfc5\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.889113 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/918ffa1d-14dc-4215-ad79-e545616bcfc5-operator-scripts\") pod \"918ffa1d-14dc-4215-ad79-e545616bcfc5\" (UID: \"918ffa1d-14dc-4215-ad79-e545616bcfc5\") " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.889599 4981 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.889616 4981 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f928877c-eaff-4ab4-ae3b-ba6ed721642c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.889631 4981 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-server-conf\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.889641 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkpxm\" (UniqueName: \"kubernetes.io/projected/f928877c-eaff-4ab4-ae3b-ba6ed721642c-kube-api-access-hkpxm\") on node \"crc\" DevicePath 
\"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.889651 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f928877c-eaff-4ab4-ae3b-ba6ed721642c-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.889662 4981 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f928877c-eaff-4ab4-ae3b-ba6ed721642c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.889686 4981 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.889699 4981 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f928877c-eaff-4ab4-ae3b-ba6ed721642c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.889711 4981 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f928877c-eaff-4ab4-ae3b-ba6ed721642c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.889721 4981 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f928877c-eaff-4ab4-ae3b-ba6ed721642c-pod-info\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.891321 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/918ffa1d-14dc-4215-ad79-e545616bcfc5-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "918ffa1d-14dc-4215-ad79-e545616bcfc5" (UID: "918ffa1d-14dc-4215-ad79-e545616bcfc5"). InnerVolumeSpecName "config-data-generated". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.892453 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/918ffa1d-14dc-4215-ad79-e545616bcfc5-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "918ffa1d-14dc-4215-ad79-e545616bcfc5" (UID: "918ffa1d-14dc-4215-ad79-e545616bcfc5"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.892645 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/918ffa1d-14dc-4215-ad79-e545616bcfc5-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "918ffa1d-14dc-4215-ad79-e545616bcfc5" (UID: "918ffa1d-14dc-4215-ad79-e545616bcfc5"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.894532 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/918ffa1d-14dc-4215-ad79-e545616bcfc5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "918ffa1d-14dc-4215-ad79-e545616bcfc5" (UID: "918ffa1d-14dc-4215-ad79-e545616bcfc5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.911107 4981 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.917560 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/918ffa1d-14dc-4215-ad79-e545616bcfc5-kube-api-access-m94vw" (OuterVolumeSpecName: "kube-api-access-m94vw") pod "918ffa1d-14dc-4215-ad79-e545616bcfc5" (UID: "918ffa1d-14dc-4215-ad79-e545616bcfc5"). 
InnerVolumeSpecName "kube-api-access-m94vw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.924759 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "mysql-db") pod "918ffa1d-14dc-4215-ad79-e545616bcfc5" (UID: "918ffa1d-14dc-4215-ad79-e545616bcfc5"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.925046 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918ffa1d-14dc-4215-ad79-e545616bcfc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "918ffa1d-14dc-4215-ad79-e545616bcfc5" (UID: "918ffa1d-14dc-4215-ad79-e545616bcfc5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.950722 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f928877c-eaff-4ab4-ae3b-ba6ed721642c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f928877c-eaff-4ab4-ae3b-ba6ed721642c" (UID: "f928877c-eaff-4ab4-ae3b-ba6ed721642c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.956894 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918ffa1d-14dc-4215-ad79-e545616bcfc5-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "918ffa1d-14dc-4215-ad79-e545616bcfc5" (UID: "918ffa1d-14dc-4215-ad79-e545616bcfc5"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.991357 4981 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/918ffa1d-14dc-4215-ad79-e545616bcfc5-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.991400 4981 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/918ffa1d-14dc-4215-ad79-e545616bcfc5-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.991416 4981 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.991428 4981 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/918ffa1d-14dc-4215-ad79-e545616bcfc5-config-data-default\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.991441 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/918ffa1d-14dc-4215-ad79-e545616bcfc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.991477 4981 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.991490 4981 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/918ffa1d-14dc-4215-ad79-e545616bcfc5-config-data-generated\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.991504 4981 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m94vw\" (UniqueName: \"kubernetes.io/projected/918ffa1d-14dc-4215-ad79-e545616bcfc5-kube-api-access-m94vw\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.991516 4981 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f928877c-eaff-4ab4-ae3b-ba6ed721642c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:41 crc kubenswrapper[4981]: I0227 19:20:41.991526 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/918ffa1d-14dc-4215-ad79-e545616bcfc5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.014119 4981 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.092806 4981 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.113606 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.187853 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-649cdc5f7c-t45d9" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.193432 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-internal-tls-certs\") pod \"e691b557-a141-44b1-a2c7-4ba36af55a15\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.193483 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-ovndb-tls-certs\") pod \"e691b557-a141-44b1-a2c7-4ba36af55a15\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.193548 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-combined-ca-bundle\") pod \"e691b557-a141-44b1-a2c7-4ba36af55a15\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.193628 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ksqs\" (UniqueName: \"kubernetes.io/projected/e691b557-a141-44b1-a2c7-4ba36af55a15-kube-api-access-4ksqs\") pod \"e691b557-a141-44b1-a2c7-4ba36af55a15\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.193664 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-httpd-config\") pod \"e691b557-a141-44b1-a2c7-4ba36af55a15\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.193766 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-config\") pod \"e691b557-a141-44b1-a2c7-4ba36af55a15\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.193791 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-public-tls-certs\") pod \"e691b557-a141-44b1-a2c7-4ba36af55a15\" (UID: \"e691b557-a141-44b1-a2c7-4ba36af55a15\") " Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.203881 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e691b557-a141-44b1-a2c7-4ba36af55a15-kube-api-access-4ksqs" (OuterVolumeSpecName: "kube-api-access-4ksqs") pod "e691b557-a141-44b1-a2c7-4ba36af55a15" (UID: "e691b557-a141-44b1-a2c7-4ba36af55a15"). InnerVolumeSpecName "kube-api-access-4ksqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.204501 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e691b557-a141-44b1-a2c7-4ba36af55a15" (UID: "e691b557-a141-44b1-a2c7-4ba36af55a15"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.267708 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e691b557-a141-44b1-a2c7-4ba36af55a15" (UID: "e691b557-a141-44b1-a2c7-4ba36af55a15"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.267810 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e691b557-a141-44b1-a2c7-4ba36af55a15" (UID: "e691b557-a141-44b1-a2c7-4ba36af55a15"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.267854 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-config" (OuterVolumeSpecName: "config") pod "e691b557-a141-44b1-a2c7-4ba36af55a15" (UID: "e691b557-a141-44b1-a2c7-4ba36af55a15"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.275886 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e691b557-a141-44b1-a2c7-4ba36af55a15" (UID: "e691b557-a141-44b1-a2c7-4ba36af55a15"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.295962 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b5819ab-18f7-4885-a4b9-a6a3401903a1-config-data\") pod \"0b5819ab-18f7-4885-a4b9-a6a3401903a1\" (UID: \"0b5819ab-18f7-4885-a4b9-a6a3401903a1\") " Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.296102 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0b5819ab-18f7-4885-a4b9-a6a3401903a1-config-data-custom\") pod \"0b5819ab-18f7-4885-a4b9-a6a3401903a1\" (UID: \"0b5819ab-18f7-4885-a4b9-a6a3401903a1\") " Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.296196 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5htrg\" (UniqueName: \"kubernetes.io/projected/0b5819ab-18f7-4885-a4b9-a6a3401903a1-kube-api-access-5htrg\") pod \"0b5819ab-18f7-4885-a4b9-a6a3401903a1\" (UID: \"0b5819ab-18f7-4885-a4b9-a6a3401903a1\") " Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.296255 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b5819ab-18f7-4885-a4b9-a6a3401903a1-combined-ca-bundle\") pod \"0b5819ab-18f7-4885-a4b9-a6a3401903a1\" (UID: \"0b5819ab-18f7-4885-a4b9-a6a3401903a1\") " Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.296278 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b5819ab-18f7-4885-a4b9-a6a3401903a1-logs\") pod \"0b5819ab-18f7-4885-a4b9-a6a3401903a1\" (UID: \"0b5819ab-18f7-4885-a4b9-a6a3401903a1\") " Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.296554 4981 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.296566 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.296575 4981 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.296583 4981 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.296591 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.296599 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ksqs\" (UniqueName: \"kubernetes.io/projected/e691b557-a141-44b1-a2c7-4ba36af55a15-kube-api-access-4ksqs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.297002 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b5819ab-18f7-4885-a4b9-a6a3401903a1-logs" (OuterVolumeSpecName: "logs") pod "0b5819ab-18f7-4885-a4b9-a6a3401903a1" (UID: "0b5819ab-18f7-4885-a4b9-a6a3401903a1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.300242 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b5819ab-18f7-4885-a4b9-a6a3401903a1-kube-api-access-5htrg" (OuterVolumeSpecName: "kube-api-access-5htrg") pod "0b5819ab-18f7-4885-a4b9-a6a3401903a1" (UID: "0b5819ab-18f7-4885-a4b9-a6a3401903a1"). InnerVolumeSpecName "kube-api-access-5htrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.302518 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b5819ab-18f7-4885-a4b9-a6a3401903a1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0b5819ab-18f7-4885-a4b9-a6a3401903a1" (UID: "0b5819ab-18f7-4885-a4b9-a6a3401903a1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.306829 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e691b557-a141-44b1-a2c7-4ba36af55a15" (UID: "e691b557-a141-44b1-a2c7-4ba36af55a15"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.323821 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b5819ab-18f7-4885-a4b9-a6a3401903a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b5819ab-18f7-4885-a4b9-a6a3401903a1" (UID: "0b5819ab-18f7-4885-a4b9-a6a3401903a1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.356311 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b5819ab-18f7-4885-a4b9-a6a3401903a1-config-data" (OuterVolumeSpecName: "config-data") pod "0b5819ab-18f7-4885-a4b9-a6a3401903a1" (UID: "0b5819ab-18f7-4885-a4b9-a6a3401903a1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.398437 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5htrg\" (UniqueName: \"kubernetes.io/projected/0b5819ab-18f7-4885-a4b9-a6a3401903a1-kube-api-access-5htrg\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.398477 4981 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e691b557-a141-44b1-a2c7-4ba36af55a15-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.398492 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b5819ab-18f7-4885-a4b9-a6a3401903a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.398509 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b5819ab-18f7-4885-a4b9-a6a3401903a1-logs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.398522 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b5819ab-18f7-4885-a4b9-a6a3401903a1-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.398537 4981 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/0b5819ab-18f7-4885-a4b9-a6a3401903a1-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.605850 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.644000 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"918ffa1d-14dc-4215-ad79-e545616bcfc5","Type":"ContainerDied","Data":"e157beb2c9fabc29967090c648f9f4962c3a2d1851fb9c578abff83008b81460"} Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.644129 4981 scope.go:117] "RemoveContainer" containerID="2383d387853c55d3b03208088009493504f3c3e88fd1ec79f4ffae6e55db5669" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.644230 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.656262 4981 generic.go:334] "Generic (PLEG): container finished" podID="8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285" containerID="5504fe172b1dab98409d133dc9ed246af0545979ab9f8635024b3da89221f235" exitCode=0 Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.656333 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285","Type":"ContainerDied","Data":"5504fe172b1dab98409d133dc9ed246af0545979ab9f8635024b3da89221f235"} Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.656361 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285","Type":"ContainerDied","Data":"54ee163c5f54dfe8276af142dd6ae6adc80234a4c4ef6b8e77e8ee533dce1de1"} Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.656416 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.662782 4981 generic.go:334] "Generic (PLEG): container finished" podID="caff730d-9210-4de9-b0f1-997e6f5f16c3" containerID="a00132f0ac6dbee951194bcad710a6371433227c3b0775c31e258a5544d129d7" exitCode=0 Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.662860 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"caff730d-9210-4de9-b0f1-997e6f5f16c3","Type":"ContainerDied","Data":"a00132f0ac6dbee951194bcad710a6371433227c3b0775c31e258a5544d129d7"} Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.670537 4981 generic.go:334] "Generic (PLEG): container finished" podID="0b5819ab-18f7-4885-a4b9-a6a3401903a1" containerID="ca32bbbc043bf5c3d443cd95726f4e60c54e238c98114190773e4f7cf04378fe" exitCode=0 Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.670799 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-649cdc5f7c-t45d9" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.670983 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-649cdc5f7c-t45d9" event={"ID":"0b5819ab-18f7-4885-a4b9-a6a3401903a1","Type":"ContainerDied","Data":"ca32bbbc043bf5c3d443cd95726f4e60c54e238c98114190773e4f7cf04378fe"} Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.671078 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-649cdc5f7c-t45d9" event={"ID":"0b5819ab-18f7-4885-a4b9-a6a3401903a1","Type":"ContainerDied","Data":"385b4de6e9976559662451b00686810be84c29adae75cd4fa55e324c82417251"} Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.679291 4981 scope.go:117] "RemoveContainer" containerID="fa18ada0fdd4fbd7e4904a65bca4de4d6dfbc3eb64c989b47c9399379c99d8be" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.684767 4981 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.685940 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b6bf89d9-5xrv6" event={"ID":"e691b557-a141-44b1-a2c7-4ba36af55a15","Type":"ContainerDied","Data":"12696eef6d50d712a7f82b80de1f34c1316cd108dc41496c8e440faf453f4db1"} Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.685998 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b6bf89d9-5xrv6" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.688779 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.694375 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.702591 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-scripts\") pod \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\" (UID: \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\") " Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.702622 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-etc-machine-id\") pod \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\" (UID: \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\") " Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.702682 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-config-data\") pod \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\" (UID: \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\") " Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.702754 4981 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-config-data-custom\") pod \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\" (UID: \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\") " Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.702798 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmdgd\" (UniqueName: \"kubernetes.io/projected/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-kube-api-access-bmdgd\") pod \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\" (UID: \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\") " Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.702838 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-combined-ca-bundle\") pod \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\" (UID: \"8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285\") " Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.704262 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285" (UID: "8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.709770 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285" (UID: "8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.723500 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-scripts" (OuterVolumeSpecName: "scripts") pod "8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285" (UID: "8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.723570 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-kube-api-access-bmdgd" (OuterVolumeSpecName: "kube-api-access-bmdgd") pod "8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285" (UID: "8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285"). InnerVolumeSpecName "kube-api-access-bmdgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.752328 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285" (UID: "8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.807741 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-config-data" (OuterVolumeSpecName: "config-data") pod "8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285" (UID: "8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.815333 4981 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.815373 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.815386 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.815395 4981 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.818194 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmdgd\" (UniqueName: \"kubernetes.io/projected/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-kube-api-access-bmdgd\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.818236 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.861264 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.872379 4981 scope.go:117] "RemoveContainer" containerID="ca40e5e300e579035592e5f61ba56d1c13272badf8162c9e4acf2f73e36e387f" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.876871 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-649cdc5f7c-t45d9"] Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.894224 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-649cdc5f7c-t45d9"] Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.902162 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5b6bf89d9-5xrv6"] Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.907331 4981 scope.go:117] "RemoveContainer" containerID="5504fe172b1dab98409d133dc9ed246af0545979ab9f8635024b3da89221f235" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.942656 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5b6bf89d9-5xrv6"] Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.955226 4981 scope.go:117] "RemoveContainer" containerID="ca40e5e300e579035592e5f61ba56d1c13272badf8162c9e4acf2f73e36e387f" Feb 27 19:20:42 crc kubenswrapper[4981]: E0227 19:20:42.956259 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca40e5e300e579035592e5f61ba56d1c13272badf8162c9e4acf2f73e36e387f\": container with ID starting with ca40e5e300e579035592e5f61ba56d1c13272badf8162c9e4acf2f73e36e387f not found: ID does not exist" containerID="ca40e5e300e579035592e5f61ba56d1c13272badf8162c9e4acf2f73e36e387f" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.956294 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca40e5e300e579035592e5f61ba56d1c13272badf8162c9e4acf2f73e36e387f"} err="failed to get container status 
\"ca40e5e300e579035592e5f61ba56d1c13272badf8162c9e4acf2f73e36e387f\": rpc error: code = NotFound desc = could not find container \"ca40e5e300e579035592e5f61ba56d1c13272badf8162c9e4acf2f73e36e387f\": container with ID starting with ca40e5e300e579035592e5f61ba56d1c13272badf8162c9e4acf2f73e36e387f not found: ID does not exist" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.956323 4981 scope.go:117] "RemoveContainer" containerID="5504fe172b1dab98409d133dc9ed246af0545979ab9f8635024b3da89221f235" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.960126 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 19:20:42 crc kubenswrapper[4981]: E0227 19:20:42.960350 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5504fe172b1dab98409d133dc9ed246af0545979ab9f8635024b3da89221f235\": container with ID starting with 5504fe172b1dab98409d133dc9ed246af0545979ab9f8635024b3da89221f235 not found: ID does not exist" containerID="5504fe172b1dab98409d133dc9ed246af0545979ab9f8635024b3da89221f235" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.960398 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5504fe172b1dab98409d133dc9ed246af0545979ab9f8635024b3da89221f235"} err="failed to get container status \"5504fe172b1dab98409d133dc9ed246af0545979ab9f8635024b3da89221f235\": rpc error: code = NotFound desc = could not find container \"5504fe172b1dab98409d133dc9ed246af0545979ab9f8635024b3da89221f235\": container with ID starting with 5504fe172b1dab98409d133dc9ed246af0545979ab9f8635024b3da89221f235 not found: ID does not exist" Feb 27 19:20:42 crc kubenswrapper[4981]: I0227 19:20:42.960427 4981 scope.go:117] "RemoveContainer" containerID="ca32bbbc043bf5c3d443cd95726f4e60c54e238c98114190773e4f7cf04378fe" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.000226 4981 scope.go:117] 
"RemoveContainer" containerID="e2aebd1ae7faac4c52f84543c2d526e5fb6068d8aa4b2db54d588a78b114286f" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.002337 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.017629 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.020728 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caff730d-9210-4de9-b0f1-997e6f5f16c3-config-data\") pod \"caff730d-9210-4de9-b0f1-997e6f5f16c3\" (UID: \"caff730d-9210-4de9-b0f1-997e6f5f16c3\") " Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.020806 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caff730d-9210-4de9-b0f1-997e6f5f16c3-combined-ca-bundle\") pod \"caff730d-9210-4de9-b0f1-997e6f5f16c3\" (UID: \"caff730d-9210-4de9-b0f1-997e6f5f16c3\") " Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.020943 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pdlf\" (UniqueName: \"kubernetes.io/projected/caff730d-9210-4de9-b0f1-997e6f5f16c3-kube-api-access-5pdlf\") pod \"caff730d-9210-4de9-b0f1-997e6f5f16c3\" (UID: \"caff730d-9210-4de9-b0f1-997e6f5f16c3\") " Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.023616 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.024323 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caff730d-9210-4de9-b0f1-997e6f5f16c3-kube-api-access-5pdlf" (OuterVolumeSpecName: "kube-api-access-5pdlf") pod "caff730d-9210-4de9-b0f1-997e6f5f16c3" (UID: "caff730d-9210-4de9-b0f1-997e6f5f16c3"). 
InnerVolumeSpecName "kube-api-access-5pdlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.030408 4981 scope.go:117] "RemoveContainer" containerID="ca32bbbc043bf5c3d443cd95726f4e60c54e238c98114190773e4f7cf04378fe" Feb 27 19:20:43 crc kubenswrapper[4981]: E0227 19:20:43.031620 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca32bbbc043bf5c3d443cd95726f4e60c54e238c98114190773e4f7cf04378fe\": container with ID starting with ca32bbbc043bf5c3d443cd95726f4e60c54e238c98114190773e4f7cf04378fe not found: ID does not exist" containerID="ca32bbbc043bf5c3d443cd95726f4e60c54e238c98114190773e4f7cf04378fe" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.031676 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca32bbbc043bf5c3d443cd95726f4e60c54e238c98114190773e4f7cf04378fe"} err="failed to get container status \"ca32bbbc043bf5c3d443cd95726f4e60c54e238c98114190773e4f7cf04378fe\": rpc error: code = NotFound desc = could not find container \"ca32bbbc043bf5c3d443cd95726f4e60c54e238c98114190773e4f7cf04378fe\": container with ID starting with ca32bbbc043bf5c3d443cd95726f4e60c54e238c98114190773e4f7cf04378fe not found: ID does not exist" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.031710 4981 scope.go:117] "RemoveContainer" containerID="e2aebd1ae7faac4c52f84543c2d526e5fb6068d8aa4b2db54d588a78b114286f" Feb 27 19:20:43 crc kubenswrapper[4981]: E0227 19:20:43.032037 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2aebd1ae7faac4c52f84543c2d526e5fb6068d8aa4b2db54d588a78b114286f\": container with ID starting with e2aebd1ae7faac4c52f84543c2d526e5fb6068d8aa4b2db54d588a78b114286f not found: ID does not exist" containerID="e2aebd1ae7faac4c52f84543c2d526e5fb6068d8aa4b2db54d588a78b114286f" Feb 27 19:20:43 
crc kubenswrapper[4981]: I0227 19:20:43.032127 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2aebd1ae7faac4c52f84543c2d526e5fb6068d8aa4b2db54d588a78b114286f"} err="failed to get container status \"e2aebd1ae7faac4c52f84543c2d526e5fb6068d8aa4b2db54d588a78b114286f\": rpc error: code = NotFound desc = could not find container \"e2aebd1ae7faac4c52f84543c2d526e5fb6068d8aa4b2db54d588a78b114286f\": container with ID starting with e2aebd1ae7faac4c52f84543c2d526e5fb6068d8aa4b2db54d588a78b114286f not found: ID does not exist" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.032146 4981 scope.go:117] "RemoveContainer" containerID="c280c1755db22cca5a1d60b0780818610aff15154fcb422c9167f6737e22b6d6" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.039577 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caff730d-9210-4de9-b0f1-997e6f5f16c3-config-data" (OuterVolumeSpecName: "config-data") pod "caff730d-9210-4de9-b0f1-997e6f5f16c3" (UID: "caff730d-9210-4de9-b0f1-997e6f5f16c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.040306 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caff730d-9210-4de9-b0f1-997e6f5f16c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "caff730d-9210-4de9-b0f1-997e6f5f16c3" (UID: "caff730d-9210-4de9-b0f1-997e6f5f16c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.051242 4981 scope.go:117] "RemoveContainer" containerID="05eafe4f692fe809c80310522b3ee1e9042aee13aa572f6f81438b46b0174a5c" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.108281 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.122402 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pdlf\" (UniqueName: \"kubernetes.io/projected/caff730d-9210-4de9-b0f1-997e6f5f16c3-kube-api-access-5pdlf\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.122435 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caff730d-9210-4de9-b0f1-997e6f5f16c3-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.122445 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caff730d-9210-4de9-b0f1-997e6f5f16c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:43 crc kubenswrapper[4981]: E0227 19:20:43.147814 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17 is running failed: container process not found" containerID="2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 27 19:20:43 crc kubenswrapper[4981]: E0227 19:20:43.148500 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17 is running failed: container process not found" containerID="2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 27 19:20:43 crc kubenswrapper[4981]: E0227 19:20:43.149175 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not 
created or running: checking if PID of 2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17 is running failed: container process not found" containerID="2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 27 19:20:43 crc kubenswrapper[4981]: E0227 19:20:43.149281 4981 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-5xwl7" podUID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" containerName="ovsdb-server" Feb 27 19:20:43 crc kubenswrapper[4981]: E0227 19:20:43.154250 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ee6fb96a4b332552632dd9dc8737353181f1aa6fa1493b16b542b6e241c02773" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 27 19:20:43 crc kubenswrapper[4981]: E0227 19:20:43.162003 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ee6fb96a4b332552632dd9dc8737353181f1aa6fa1493b16b542b6e241c02773" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 27 19:20:43 crc kubenswrapper[4981]: E0227 19:20:43.167559 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ee6fb96a4b332552632dd9dc8737353181f1aa6fa1493b16b542b6e241c02773" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 27 19:20:43 crc kubenswrapper[4981]: E0227 
19:20:43.167624 4981 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-5xwl7" podUID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" containerName="ovs-vswitchd" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.223181 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-combined-ca-bundle\") pod \"0d200585-c61d-43f8-a17e-54f695df7dbe\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.223301 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d200585-c61d-43f8-a17e-54f695df7dbe-log-httpd\") pod \"0d200585-c61d-43f8-a17e-54f695df7dbe\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.223327 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-scripts\") pod \"0d200585-c61d-43f8-a17e-54f695df7dbe\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.223382 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-config-data\") pod \"0d200585-c61d-43f8-a17e-54f695df7dbe\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.223416 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-ceilometer-tls-certs\") pod 
\"0d200585-c61d-43f8-a17e-54f695df7dbe\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.223459 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d200585-c61d-43f8-a17e-54f695df7dbe-run-httpd\") pod \"0d200585-c61d-43f8-a17e-54f695df7dbe\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.223553 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-sg-core-conf-yaml\") pod \"0d200585-c61d-43f8-a17e-54f695df7dbe\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.224731 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d200585-c61d-43f8-a17e-54f695df7dbe-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0d200585-c61d-43f8-a17e-54f695df7dbe" (UID: "0d200585-c61d-43f8-a17e-54f695df7dbe"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.225425 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d200585-c61d-43f8-a17e-54f695df7dbe-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0d200585-c61d-43f8-a17e-54f695df7dbe" (UID: "0d200585-c61d-43f8-a17e-54f695df7dbe"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.223599 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tvfk\" (UniqueName: \"kubernetes.io/projected/0d200585-c61d-43f8-a17e-54f695df7dbe-kube-api-access-8tvfk\") pod \"0d200585-c61d-43f8-a17e-54f695df7dbe\" (UID: \"0d200585-c61d-43f8-a17e-54f695df7dbe\") " Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.226102 4981 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d200585-c61d-43f8-a17e-54f695df7dbe-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.226195 4981 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d200585-c61d-43f8-a17e-54f695df7dbe-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.226579 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-scripts" (OuterVolumeSpecName: "scripts") pod "0d200585-c61d-43f8-a17e-54f695df7dbe" (UID: "0d200585-c61d-43f8-a17e-54f695df7dbe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.228362 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d200585-c61d-43f8-a17e-54f695df7dbe-kube-api-access-8tvfk" (OuterVolumeSpecName: "kube-api-access-8tvfk") pod "0d200585-c61d-43f8-a17e-54f695df7dbe" (UID: "0d200585-c61d-43f8-a17e-54f695df7dbe"). InnerVolumeSpecName "kube-api-access-8tvfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.247388 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0d200585-c61d-43f8-a17e-54f695df7dbe" (UID: "0d200585-c61d-43f8-a17e-54f695df7dbe"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.261196 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0d200585-c61d-43f8-a17e-54f695df7dbe" (UID: "0d200585-c61d-43f8-a17e-54f695df7dbe"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.275949 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d200585-c61d-43f8-a17e-54f695df7dbe" (UID: "0d200585-c61d-43f8-a17e-54f695df7dbe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.314404 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-config-data" (OuterVolumeSpecName: "config-data") pod "0d200585-c61d-43f8-a17e-54f695df7dbe" (UID: "0d200585-c61d-43f8-a17e-54f695df7dbe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.327620 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.327830 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.327845 4981 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.327931 4981 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.327942 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tvfk\" (UniqueName: \"kubernetes.io/projected/0d200585-c61d-43f8-a17e-54f695df7dbe-kube-api-access-8tvfk\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.327950 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d200585-c61d-43f8-a17e-54f695df7dbe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.660859 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="087da308-30ee-4a17-945a-844baf0cf4b4" path="/var/lib/kubelet/pods/087da308-30ee-4a17-945a-844baf0cf4b4/volumes" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.661499 4981 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b5819ab-18f7-4885-a4b9-a6a3401903a1" path="/var/lib/kubelet/pods/0b5819ab-18f7-4885-a4b9-a6a3401903a1/volumes" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.662044 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285" path="/var/lib/kubelet/pods/8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285/volumes" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.663558 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="918ffa1d-14dc-4215-ad79-e545616bcfc5" path="/var/lib/kubelet/pods/918ffa1d-14dc-4215-ad79-e545616bcfc5/volumes" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.664322 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" path="/var/lib/kubelet/pods/991e04a2-e14a-4987-a7d8-b7f5db5cb8e3/volumes" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.666476 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e691b557-a141-44b1-a2c7-4ba36af55a15" path="/var/lib/kubelet/pods/e691b557-a141-44b1-a2c7-4ba36af55a15/volumes" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.667215 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f928877c-eaff-4ab4-ae3b-ba6ed721642c" path="/var/lib/kubelet/pods/f928877c-eaff-4ab4-ae3b-ba6ed721642c/volumes" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.700817 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.700799 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"caff730d-9210-4de9-b0f1-997e6f5f16c3","Type":"ContainerDied","Data":"5be4e0ee3d2bf6a1d235466cbd5bd8554026f00794bf30f1e9af58276c3a0684"} Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.700922 4981 scope.go:117] "RemoveContainer" containerID="a00132f0ac6dbee951194bcad710a6371433227c3b0775c31e258a5544d129d7" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.708728 4981 generic.go:334] "Generic (PLEG): container finished" podID="0d200585-c61d-43f8-a17e-54f695df7dbe" containerID="52ab7ade36a7bea163dae4153632d761e9bf6f316544d5345a2f4fc82200a997" exitCode=0 Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.708896 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.709260 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d200585-c61d-43f8-a17e-54f695df7dbe","Type":"ContainerDied","Data":"52ab7ade36a7bea163dae4153632d761e9bf6f316544d5345a2f4fc82200a997"} Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.709303 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0d200585-c61d-43f8-a17e-54f695df7dbe","Type":"ContainerDied","Data":"d1671f0d999dc4507c5739146ed8be565b53f6d68cea6c8a8ad6b8115ca05fbd"} Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.725168 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.735563 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.744273 4981 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/ceilometer-0"] Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.746774 4981 scope.go:117] "RemoveContainer" containerID="fae968112d7a204ca91d2a8361567a64886536860a977043c0f6d6e84eeb765b" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.749491 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.765117 4981 scope.go:117] "RemoveContainer" containerID="557acdada7a6927a2b4039b69f2529a1cfea5b22f511fe9433b3df0d998e6ebe" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.781722 4981 scope.go:117] "RemoveContainer" containerID="52ab7ade36a7bea163dae4153632d761e9bf6f316544d5345a2f4fc82200a997" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.797980 4981 scope.go:117] "RemoveContainer" containerID="bda927ae23d6de9d50708df6de11982ad0fda24fdecf895c9e04685dc88ac49b" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.814296 4981 scope.go:117] "RemoveContainer" containerID="fae968112d7a204ca91d2a8361567a64886536860a977043c0f6d6e84eeb765b" Feb 27 19:20:43 crc kubenswrapper[4981]: E0227 19:20:43.814768 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fae968112d7a204ca91d2a8361567a64886536860a977043c0f6d6e84eeb765b\": container with ID starting with fae968112d7a204ca91d2a8361567a64886536860a977043c0f6d6e84eeb765b not found: ID does not exist" containerID="fae968112d7a204ca91d2a8361567a64886536860a977043c0f6d6e84eeb765b" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.814801 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fae968112d7a204ca91d2a8361567a64886536860a977043c0f6d6e84eeb765b"} err="failed to get container status \"fae968112d7a204ca91d2a8361567a64886536860a977043c0f6d6e84eeb765b\": rpc error: code = NotFound desc = could not find container 
\"fae968112d7a204ca91d2a8361567a64886536860a977043c0f6d6e84eeb765b\": container with ID starting with fae968112d7a204ca91d2a8361567a64886536860a977043c0f6d6e84eeb765b not found: ID does not exist" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.814823 4981 scope.go:117] "RemoveContainer" containerID="557acdada7a6927a2b4039b69f2529a1cfea5b22f511fe9433b3df0d998e6ebe" Feb 27 19:20:43 crc kubenswrapper[4981]: E0227 19:20:43.815461 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"557acdada7a6927a2b4039b69f2529a1cfea5b22f511fe9433b3df0d998e6ebe\": container with ID starting with 557acdada7a6927a2b4039b69f2529a1cfea5b22f511fe9433b3df0d998e6ebe not found: ID does not exist" containerID="557acdada7a6927a2b4039b69f2529a1cfea5b22f511fe9433b3df0d998e6ebe" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.815482 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"557acdada7a6927a2b4039b69f2529a1cfea5b22f511fe9433b3df0d998e6ebe"} err="failed to get container status \"557acdada7a6927a2b4039b69f2529a1cfea5b22f511fe9433b3df0d998e6ebe\": rpc error: code = NotFound desc = could not find container \"557acdada7a6927a2b4039b69f2529a1cfea5b22f511fe9433b3df0d998e6ebe\": container with ID starting with 557acdada7a6927a2b4039b69f2529a1cfea5b22f511fe9433b3df0d998e6ebe not found: ID does not exist" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.815495 4981 scope.go:117] "RemoveContainer" containerID="52ab7ade36a7bea163dae4153632d761e9bf6f316544d5345a2f4fc82200a997" Feb 27 19:20:43 crc kubenswrapper[4981]: E0227 19:20:43.815712 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52ab7ade36a7bea163dae4153632d761e9bf6f316544d5345a2f4fc82200a997\": container with ID starting with 52ab7ade36a7bea163dae4153632d761e9bf6f316544d5345a2f4fc82200a997 not found: ID does not exist" 
containerID="52ab7ade36a7bea163dae4153632d761e9bf6f316544d5345a2f4fc82200a997" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.815740 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52ab7ade36a7bea163dae4153632d761e9bf6f316544d5345a2f4fc82200a997"} err="failed to get container status \"52ab7ade36a7bea163dae4153632d761e9bf6f316544d5345a2f4fc82200a997\": rpc error: code = NotFound desc = could not find container \"52ab7ade36a7bea163dae4153632d761e9bf6f316544d5345a2f4fc82200a997\": container with ID starting with 52ab7ade36a7bea163dae4153632d761e9bf6f316544d5345a2f4fc82200a997 not found: ID does not exist" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.815760 4981 scope.go:117] "RemoveContainer" containerID="bda927ae23d6de9d50708df6de11982ad0fda24fdecf895c9e04685dc88ac49b" Feb 27 19:20:43 crc kubenswrapper[4981]: E0227 19:20:43.816525 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bda927ae23d6de9d50708df6de11982ad0fda24fdecf895c9e04685dc88ac49b\": container with ID starting with bda927ae23d6de9d50708df6de11982ad0fda24fdecf895c9e04685dc88ac49b not found: ID does not exist" containerID="bda927ae23d6de9d50708df6de11982ad0fda24fdecf895c9e04685dc88ac49b" Feb 27 19:20:43 crc kubenswrapper[4981]: I0227 19:20:43.816560 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bda927ae23d6de9d50708df6de11982ad0fda24fdecf895c9e04685dc88ac49b"} err="failed to get container status \"bda927ae23d6de9d50708df6de11982ad0fda24fdecf895c9e04685dc88ac49b\": rpc error: code = NotFound desc = could not find container \"bda927ae23d6de9d50708df6de11982ad0fda24fdecf895c9e04685dc88ac49b\": container with ID starting with bda927ae23d6de9d50708df6de11982ad0fda24fdecf895c9e04685dc88ac49b not found: ID does not exist" Feb 27 19:20:45 crc kubenswrapper[4981]: I0227 19:20:45.637417 4981 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d200585-c61d-43f8-a17e-54f695df7dbe" path="/var/lib/kubelet/pods/0d200585-c61d-43f8-a17e-54f695df7dbe/volumes" Feb 27 19:20:45 crc kubenswrapper[4981]: I0227 19:20:45.638252 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caff730d-9210-4de9-b0f1-997e6f5f16c3" path="/var/lib/kubelet/pods/caff730d-9210-4de9-b0f1-997e6f5f16c3/volumes" Feb 27 19:20:48 crc kubenswrapper[4981]: E0227 19:20:48.147323 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17 is running failed: container process not found" containerID="2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 27 19:20:48 crc kubenswrapper[4981]: E0227 19:20:48.147930 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17 is running failed: container process not found" containerID="2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 27 19:20:48 crc kubenswrapper[4981]: E0227 19:20:48.148267 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17 is running failed: container process not found" containerID="2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 27 19:20:48 crc kubenswrapper[4981]: E0227 19:20:48.148304 4981 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = 
container is not created or running: checking if PID of 2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-5xwl7" podUID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" containerName="ovsdb-server" Feb 27 19:20:48 crc kubenswrapper[4981]: E0227 19:20:48.148629 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ee6fb96a4b332552632dd9dc8737353181f1aa6fa1493b16b542b6e241c02773" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 27 19:20:48 crc kubenswrapper[4981]: E0227 19:20:48.150212 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ee6fb96a4b332552632dd9dc8737353181f1aa6fa1493b16b542b6e241c02773" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 27 19:20:48 crc kubenswrapper[4981]: E0227 19:20:48.151668 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ee6fb96a4b332552632dd9dc8737353181f1aa6fa1493b16b542b6e241c02773" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 27 19:20:48 crc kubenswrapper[4981]: E0227 19:20:48.151724 4981 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-5xwl7" podUID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" containerName="ovs-vswitchd" Feb 27 19:20:50 crc kubenswrapper[4981]: I0227 19:20:50.248871 4981 patch_prober.go:28] interesting 
pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:20:50 crc kubenswrapper[4981]: I0227 19:20:50.249194 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:20:50 crc kubenswrapper[4981]: I0227 19:20:50.249253 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 19:20:50 crc kubenswrapper[4981]: I0227 19:20:50.249684 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3eaaac0016632062e717d6fc785b5e6a960c31de063fc4ec9f829edb351d80fe"} pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 19:20:50 crc kubenswrapper[4981]: I0227 19:20:50.249728 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" containerID="cri-o://3eaaac0016632062e717d6fc785b5e6a960c31de063fc4ec9f829edb351d80fe" gracePeriod=600 Feb 27 19:20:50 crc kubenswrapper[4981]: I0227 19:20:50.786883 4981 generic.go:334] "Generic (PLEG): container finished" podID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerID="3eaaac0016632062e717d6fc785b5e6a960c31de063fc4ec9f829edb351d80fe" exitCode=0 Feb 27 19:20:50 crc kubenswrapper[4981]: I0227 19:20:50.786952 
4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerDied","Data":"3eaaac0016632062e717d6fc785b5e6a960c31de063fc4ec9f829edb351d80fe"} Feb 27 19:20:50 crc kubenswrapper[4981]: I0227 19:20:50.787276 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerStarted","Data":"d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6"} Feb 27 19:20:50 crc kubenswrapper[4981]: I0227 19:20:50.787296 4981 scope.go:117] "RemoveContainer" containerID="d11e36ea9ee25a3e07cb66e7fb51a647d9aa9fadffcc3ef5ff21819ee2509bba" Feb 27 19:20:53 crc kubenswrapper[4981]: E0227 19:20:53.146817 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17 is running failed: container process not found" containerID="2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 27 19:20:53 crc kubenswrapper[4981]: E0227 19:20:53.147766 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17 is running failed: container process not found" containerID="2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 27 19:20:53 crc kubenswrapper[4981]: E0227 19:20:53.148170 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17 is running failed: container process not found" containerID="2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 27 19:20:53 crc kubenswrapper[4981]: E0227 19:20:53.148208 4981 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-5xwl7" podUID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" containerName="ovsdb-server" Feb 27 19:20:53 crc kubenswrapper[4981]: E0227 19:20:53.148680 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ee6fb96a4b332552632dd9dc8737353181f1aa6fa1493b16b542b6e241c02773" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 27 19:20:53 crc kubenswrapper[4981]: E0227 19:20:53.150327 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ee6fb96a4b332552632dd9dc8737353181f1aa6fa1493b16b542b6e241c02773" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 27 19:20:53 crc kubenswrapper[4981]: E0227 19:20:53.151626 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ee6fb96a4b332552632dd9dc8737353181f1aa6fa1493b16b542b6e241c02773" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 27 19:20:53 crc kubenswrapper[4981]: E0227 19:20:53.151690 4981 prober.go:104] 
"Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-5xwl7" podUID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" containerName="ovs-vswitchd" Feb 27 19:20:58 crc kubenswrapper[4981]: E0227 19:20:58.147078 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17 is running failed: container process not found" containerID="2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 27 19:20:58 crc kubenswrapper[4981]: E0227 19:20:58.147962 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17 is running failed: container process not found" containerID="2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 27 19:20:58 crc kubenswrapper[4981]: E0227 19:20:58.148281 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17 is running failed: container process not found" containerID="2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 27 19:20:58 crc kubenswrapper[4981]: E0227 19:20:58.148339 4981 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17 is running failed: 
container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-5xwl7" podUID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" containerName="ovsdb-server" Feb 27 19:20:58 crc kubenswrapper[4981]: E0227 19:20:58.148666 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ee6fb96a4b332552632dd9dc8737353181f1aa6fa1493b16b542b6e241c02773" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 27 19:20:58 crc kubenswrapper[4981]: E0227 19:20:58.149964 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ee6fb96a4b332552632dd9dc8737353181f1aa6fa1493b16b542b6e241c02773" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 27 19:20:58 crc kubenswrapper[4981]: E0227 19:20:58.151639 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ee6fb96a4b332552632dd9dc8737353181f1aa6fa1493b16b542b6e241c02773" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 27 19:20:58 crc kubenswrapper[4981]: E0227 19:20:58.151679 4981 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-5xwl7" podUID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" containerName="ovs-vswitchd" Feb 27 19:21:02 crc kubenswrapper[4981]: I0227 19:21:02.898890 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5xwl7_a1d85462-e999-48fc-8c36-ce8bbe60ed3d/ovs-vswitchd/0.log" Feb 27 19:21:02 crc 
kubenswrapper[4981]: I0227 19:21:02.900301 4981 generic.go:334] "Generic (PLEG): container finished" podID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" containerID="ee6fb96a4b332552632dd9dc8737353181f1aa6fa1493b16b542b6e241c02773" exitCode=137 Feb 27 19:21:02 crc kubenswrapper[4981]: I0227 19:21:02.900339 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5xwl7" event={"ID":"a1d85462-e999-48fc-8c36-ce8bbe60ed3d","Type":"ContainerDied","Data":"ee6fb96a4b332552632dd9dc8737353181f1aa6fa1493b16b542b6e241c02773"} Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.110300 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5xwl7_a1d85462-e999-48fc-8c36-ce8bbe60ed3d/ovs-vswitchd/0.log" Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.111580 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.244253 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-scripts\") pod \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\" (UID: \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\") " Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.244367 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-var-lib\") pod \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\" (UID: \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\") " Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.244444 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-var-lib" (OuterVolumeSpecName: "var-lib") pod "a1d85462-e999-48fc-8c36-ce8bbe60ed3d" (UID: "a1d85462-e999-48fc-8c36-ce8bbe60ed3d"). InnerVolumeSpecName "var-lib". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.244539 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44jlg\" (UniqueName: \"kubernetes.io/projected/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-kube-api-access-44jlg\") pod \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\" (UID: \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\") " Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.244613 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-var-log\") pod \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\" (UID: \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\") " Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.244681 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-etc-ovs\") pod \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\" (UID: \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\") " Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.244749 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-var-run\") pod \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\" (UID: \"a1d85462-e999-48fc-8c36-ce8bbe60ed3d\") " Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.244891 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "a1d85462-e999-48fc-8c36-ce8bbe60ed3d" (UID: "a1d85462-e999-48fc-8c36-ce8bbe60ed3d"). InnerVolumeSpecName "etc-ovs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.244929 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-var-log" (OuterVolumeSpecName: "var-log") pod "a1d85462-e999-48fc-8c36-ce8bbe60ed3d" (UID: "a1d85462-e999-48fc-8c36-ce8bbe60ed3d"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.245081 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-var-run" (OuterVolumeSpecName: "var-run") pod "a1d85462-e999-48fc-8c36-ce8bbe60ed3d" (UID: "a1d85462-e999-48fc-8c36-ce8bbe60ed3d"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.245242 4981 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-var-log\") on node \"crc\" DevicePath \"\"" Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.245253 4981 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-etc-ovs\") on node \"crc\" DevicePath \"\"" Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.245261 4981 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-var-run\") on node \"crc\" DevicePath \"\"" Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.245269 4981 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-var-lib\") on node \"crc\" DevicePath \"\"" Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.245951 4981 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-scripts" (OuterVolumeSpecName: "scripts") pod "a1d85462-e999-48fc-8c36-ce8bbe60ed3d" (UID: "a1d85462-e999-48fc-8c36-ce8bbe60ed3d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.251660 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-kube-api-access-44jlg" (OuterVolumeSpecName: "kube-api-access-44jlg") pod "a1d85462-e999-48fc-8c36-ce8bbe60ed3d" (UID: "a1d85462-e999-48fc-8c36-ce8bbe60ed3d"). InnerVolumeSpecName "kube-api-access-44jlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.346210 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.346246 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44jlg\" (UniqueName: \"kubernetes.io/projected/a1d85462-e999-48fc-8c36-ce8bbe60ed3d-kube-api-access-44jlg\") on node \"crc\" DevicePath \"\"" Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.928878 4981 generic.go:334] "Generic (PLEG): container finished" podID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerID="f0a4445a2b6fa3cf8145c61803d537465f991247ac86d8c79a5cbc0036d344fa" exitCode=137 Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.928921 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerDied","Data":"f0a4445a2b6fa3cf8145c61803d537465f991247ac86d8c79a5cbc0036d344fa"} Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.932525 4981 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5xwl7_a1d85462-e999-48fc-8c36-ce8bbe60ed3d/ovs-vswitchd/0.log" Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.933293 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5xwl7" event={"ID":"a1d85462-e999-48fc-8c36-ce8bbe60ed3d","Type":"ContainerDied","Data":"97d198b6d642657f2ee2d06b0e584af11befc27e2a5cfae696491be27e0596c6"} Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.933341 4981 scope.go:117] "RemoveContainer" containerID="ee6fb96a4b332552632dd9dc8737353181f1aa6fa1493b16b542b6e241c02773" Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.933619 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5xwl7" Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.950618 4981 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podaec1d5c5-b41c-4d8b-9810-04a25a18c1b1"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podaec1d5c5-b41c-4d8b-9810-04a25a18c1b1] : Timed out while waiting for systemd to remove kubepods-besteffort-podaec1d5c5_b41c_4d8b_9810_04a25a18c1b1.slice" Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.968617 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-5xwl7"] Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.974486 4981 scope.go:117] "RemoveContainer" containerID="2b6c875df68a32a5fa4afc730b0540af5fc84be4d374d75be7df04bcfbc98f17" Feb 27 19:21:03 crc kubenswrapper[4981]: I0227 19:21:03.978487 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-5xwl7"] Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.000813 4981 scope.go:117] "RemoveContainer" containerID="787cfc4f63fedaed0585d22d5e64190ea52cda576c2784f0c43fce945146b360" Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.115465 4981 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.261532 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-etc-swift\") pod \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.261603 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c9c5bb1a-80fb-459f-acb9-e3751c60f684-lock\") pod \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.261668 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjg8v\" (UniqueName: \"kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-kube-api-access-hjg8v\") pod \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.261706 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c9c5bb1a-80fb-459f-acb9-e3751c60f684-cache\") pod \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.261768 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c5bb1a-80fb-459f-acb9-e3751c60f684-combined-ca-bundle\") pod \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.261788 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\" (UID: \"c9c5bb1a-80fb-459f-acb9-e3751c60f684\") " Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.262676 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9c5bb1a-80fb-459f-acb9-e3751c60f684-cache" (OuterVolumeSpecName: "cache") pod "c9c5bb1a-80fb-459f-acb9-e3751c60f684" (UID: "c9c5bb1a-80fb-459f-acb9-e3751c60f684"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.262709 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9c5bb1a-80fb-459f-acb9-e3751c60f684-lock" (OuterVolumeSpecName: "lock") pod "c9c5bb1a-80fb-459f-acb9-e3751c60f684" (UID: "c9c5bb1a-80fb-459f-acb9-e3751c60f684"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.265591 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "swift") pod "c9c5bb1a-80fb-459f-acb9-e3751c60f684" (UID: "c9c5bb1a-80fb-459f-acb9-e3751c60f684"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.266124 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-kube-api-access-hjg8v" (OuterVolumeSpecName: "kube-api-access-hjg8v") pod "c9c5bb1a-80fb-459f-acb9-e3751c60f684" (UID: "c9c5bb1a-80fb-459f-acb9-e3751c60f684"). InnerVolumeSpecName "kube-api-access-hjg8v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.266467 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c9c5bb1a-80fb-459f-acb9-e3751c60f684" (UID: "c9c5bb1a-80fb-459f-acb9-e3751c60f684"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.363592 4981 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.363626 4981 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c9c5bb1a-80fb-459f-acb9-e3751c60f684-lock\") on node \"crc\" DevicePath \"\"" Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.363636 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjg8v\" (UniqueName: \"kubernetes.io/projected/c9c5bb1a-80fb-459f-acb9-e3751c60f684-kube-api-access-hjg8v\") on node \"crc\" DevicePath \"\"" Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.363645 4981 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c9c5bb1a-80fb-459f-acb9-e3751c60f684-cache\") on node \"crc\" DevicePath \"\"" Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.363673 4981 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.377923 4981 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node 
"crc" Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.465630 4981 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.486189 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9c5bb1a-80fb-459f-acb9-e3751c60f684-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9c5bb1a-80fb-459f-acb9-e3751c60f684" (UID: "c9c5bb1a-80fb-459f-acb9-e3751c60f684"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.566710 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c5bb1a-80fb-459f-acb9-e3751c60f684-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.948478 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c9c5bb1a-80fb-459f-acb9-e3751c60f684","Type":"ContainerDied","Data":"aef3afc70ca34265bb191fc692e1a0d2b393d895b5f2e44bc8c7d999f7ccce8e"} Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.948529 4981 scope.go:117] "RemoveContainer" containerID="77798546322cfdb767abb826f6d72d37c7c97fa182b47831196724af9d277123" Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.948669 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.990279 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Feb 27 19:21:04 crc kubenswrapper[4981]: I0227 19:21:04.993120 4981 scope.go:117] "RemoveContainer" containerID="93c87ecb8d8bad33d71e9078051a7748cc757e16bcf80e48a23944e5c1b69077" Feb 27 19:21:05 crc kubenswrapper[4981]: I0227 19:21:05.001150 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Feb 27 19:21:05 crc kubenswrapper[4981]: I0227 19:21:05.014446 4981 scope.go:117] "RemoveContainer" containerID="340a6d7be188f87cef0feaea5f958cc9043c49411edd955b9683aab0230bb9ce" Feb 27 19:21:05 crc kubenswrapper[4981]: I0227 19:21:05.044892 4981 scope.go:117] "RemoveContainer" containerID="24a7799c5cd63e35072f81b37d3932a76fad3192143aeadfc8474ce31dd7dd07" Feb 27 19:21:05 crc kubenswrapper[4981]: I0227 19:21:05.060999 4981 scope.go:117] "RemoveContainer" containerID="f0a4445a2b6fa3cf8145c61803d537465f991247ac86d8c79a5cbc0036d344fa" Feb 27 19:21:05 crc kubenswrapper[4981]: I0227 19:21:05.082241 4981 scope.go:117] "RemoveContainer" containerID="ee08f1be3428c964e3a5c4747f6aa00160451c72e3665c691697f802f5a0bff8" Feb 27 19:21:05 crc kubenswrapper[4981]: I0227 19:21:05.103693 4981 scope.go:117] "RemoveContainer" containerID="6429fdd1fd1cd3788a688757b026c7af8c055f3fb7254d239ca6600f69c3448f" Feb 27 19:21:05 crc kubenswrapper[4981]: I0227 19:21:05.130706 4981 scope.go:117] "RemoveContainer" containerID="7dfea33b75db73391310211c5e0efd16be4a0053864fa6f4abfd9bc77f7118f0" Feb 27 19:21:05 crc kubenswrapper[4981]: I0227 19:21:05.149495 4981 scope.go:117] "RemoveContainer" containerID="80c7a986c413669964ba2fa274f8997a3315fbfd2c8ff1d23dbd74c88b68e595" Feb 27 19:21:05 crc kubenswrapper[4981]: I0227 19:21:05.172138 4981 scope.go:117] "RemoveContainer" containerID="bd23d8482fb237875074c0a92ce77c62ec21a9f35c2014202018bbdef7e20697" Feb 27 19:21:05 crc 
kubenswrapper[4981]: I0227 19:21:05.188632 4981 scope.go:117] "RemoveContainer" containerID="65f6f3c00e9667ac2dc2eaf62c9691a794f16c6916c044f6252dfe67b11c9cec" Feb 27 19:21:05 crc kubenswrapper[4981]: I0227 19:21:05.542723 4981 scope.go:117] "RemoveContainer" containerID="38569ba465d1fbcc944576c382365600b3972a77a5d42e3a33726b72c23be51a" Feb 27 19:21:05 crc kubenswrapper[4981]: I0227 19:21:05.557420 4981 scope.go:117] "RemoveContainer" containerID="bd2133012f7ec8d5b23febc4eae98775150d6779cece3953959dd0ebaafac076" Feb 27 19:21:05 crc kubenswrapper[4981]: I0227 19:21:05.573913 4981 scope.go:117] "RemoveContainer" containerID="9a1a2e131f5761d079c69185c95e394bd577eda00ea0354161ac5ab992f9e3d0" Feb 27 19:21:05 crc kubenswrapper[4981]: I0227 19:21:05.590754 4981 scope.go:117] "RemoveContainer" containerID="e0eca54f11d429374a0eee69171647db11c1192aa00c288bd9e67f3a6f0c0246" Feb 27 19:21:05 crc kubenswrapper[4981]: I0227 19:21:05.638825 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" path="/var/lib/kubelet/pods/a1d85462-e999-48fc-8c36-ce8bbe60ed3d/volumes" Feb 27 19:21:05 crc kubenswrapper[4981]: I0227 19:21:05.639535 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" path="/var/lib/kubelet/pods/c9c5bb1a-80fb-459f-acb9-e3751c60f684/volumes" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.668750 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-848hk"] Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.669581 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918ffa1d-14dc-4215-ad79-e545616bcfc5" containerName="mysql-bootstrap" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.669597 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="918ffa1d-14dc-4215-ad79-e545616bcfc5" containerName="mysql-bootstrap" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 
19:21:19.669616 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a912cdfa-b0ce-4ed4-909d-9d1af2a5a879" containerName="barbican-api" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.669624 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="a912cdfa-b0ce-4ed4-909d-9d1af2a5a879" containerName="barbican-api" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.669634 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89937a2b-e16c-4964-a540-5a2f8fe812b7" containerName="kube-state-metrics" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.669643 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="89937a2b-e16c-4964-a540-5a2f8fe812b7" containerName="kube-state-metrics" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.669653 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d200585-c61d-43f8-a17e-54f695df7dbe" containerName="proxy-httpd" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.669661 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d200585-c61d-43f8-a17e-54f695df7dbe" containerName="proxy-httpd" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.669673 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="account-server" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.669680 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="account-server" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.669697 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7f2b23-f800-4970-b530-aac7387e0936" containerName="nova-metadata-log" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.669706 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7f2b23-f800-4970-b530-aac7387e0936" containerName="nova-metadata-log" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.669715 4981 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="object-expirer" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.669723 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="object-expirer" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.669738 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83a972b-9d9d-407c-a714-821900bc148e" containerName="nova-scheduler-scheduler" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.669746 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83a972b-9d9d-407c-a714-821900bc148e" containerName="nova-scheduler-scheduler" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.669763 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a912cdfa-b0ce-4ed4-909d-9d1af2a5a879" containerName="barbican-api-log" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.669772 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="a912cdfa-b0ce-4ed4-909d-9d1af2a5a879" containerName="barbican-api-log" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.669780 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b5819ab-18f7-4885-a4b9-a6a3401903a1" containerName="barbican-worker" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.669790 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b5819ab-18f7-4885-a4b9-a6a3401903a1" containerName="barbican-worker" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.669806 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" containerName="ovsdb-server-init" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.669816 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" containerName="ovsdb-server-init" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 
19:21:19.669827 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="object-auditor" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.669836 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="object-auditor" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.669850 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="swift-recon-cron" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.669858 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="swift-recon-cron" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.669873 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="account-reaper" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.669881 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="account-reaper" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.669896 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e691b557-a141-44b1-a2c7-4ba36af55a15" containerName="neutron-api" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.669906 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e691b557-a141-44b1-a2c7-4ba36af55a15" containerName="neutron-api" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.669920 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="object-replicator" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.669928 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="object-replicator" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.669944 4981 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa3914e-426b-4791-8199-a7630729baf0" containerName="nova-api-api" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.669952 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa3914e-426b-4791-8199-a7630729baf0" containerName="nova-api-api" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.669966 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285" containerName="cinder-scheduler" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.669976 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285" containerName="cinder-scheduler" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.669989 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" containerName="ovsdb-server" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.669997 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" containerName="ovsdb-server" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670010 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="container-updater" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670018 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="container-updater" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670033 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1bafd9d-a283-406e-900b-3c5d1aae55fe" containerName="glance-log" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670040 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1bafd9d-a283-406e-900b-3c5d1aae55fe" containerName="glance-log" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670072 4981 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e4ec5ec3-4a83-4c2a-adde-600a759fcec2" containerName="nova-cell0-conductor-conductor" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670082 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ec5ec3-4a83-4c2a-adde-600a759fcec2" containerName="nova-cell0-conductor-conductor" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670100 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="087da308-30ee-4a17-945a-844baf0cf4b4" containerName="keystone-api" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670109 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="087da308-30ee-4a17-945a-844baf0cf4b4" containerName="keystone-api" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670123 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d200585-c61d-43f8-a17e-54f695df7dbe" containerName="sg-core" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670131 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d200585-c61d-43f8-a17e-54f695df7dbe" containerName="sg-core" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670142 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d200585-c61d-43f8-a17e-54f695df7dbe" containerName="ceilometer-central-agent" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670151 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d200585-c61d-43f8-a17e-54f695df7dbe" containerName="ceilometer-central-agent" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670161 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aa05f73-e7d2-440b-ab1f-780f23c26272" containerName="glance-log" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670170 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aa05f73-e7d2-440b-ab1f-780f23c26272" containerName="glance-log" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670178 4981 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="account-replicator" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670186 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="account-replicator" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670200 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d200585-c61d-43f8-a17e-54f695df7dbe" containerName="ceilometer-notification-agent" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670209 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d200585-c61d-43f8-a17e-54f695df7dbe" containerName="ceilometer-notification-agent" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670221 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f928877c-eaff-4ab4-ae3b-ba6ed721642c" containerName="rabbitmq" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670229 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f928877c-eaff-4ab4-ae3b-ba6ed721642c" containerName="rabbitmq" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670238 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aa05f73-e7d2-440b-ab1f-780f23c26272" containerName="glance-httpd" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670247 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aa05f73-e7d2-440b-ab1f-780f23c26272" containerName="glance-httpd" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670257 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa3914e-426b-4791-8199-a7630729baf0" containerName="nova-api-log" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670265 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa3914e-426b-4791-8199-a7630729baf0" containerName="nova-api-log" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670279 4981 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="container-auditor" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670287 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="container-auditor" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670302 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="account-auditor" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670310 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="account-auditor" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670326 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1bafd9d-a283-406e-900b-3c5d1aae55fe" containerName="glance-httpd" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670335 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1bafd9d-a283-406e-900b-3c5d1aae55fe" containerName="glance-httpd" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670345 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="object-updater" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670353 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="object-updater" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670364 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2" containerName="placement-api" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670372 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2" containerName="placement-api" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670381 4981 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="0b5819ab-18f7-4885-a4b9-a6a3401903a1" containerName="barbican-worker-log" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670389 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b5819ab-18f7-4885-a4b9-a6a3401903a1" containerName="barbican-worker-log" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670398 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7f2b23-f800-4970-b530-aac7387e0936" containerName="nova-metadata-metadata" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670406 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7f2b23-f800-4970-b530-aac7387e0936" containerName="nova-metadata-metadata" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670417 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918ffa1d-14dc-4215-ad79-e545616bcfc5" containerName="galera" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670425 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="918ffa1d-14dc-4215-ad79-e545616bcfc5" containerName="galera" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670438 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e691b557-a141-44b1-a2c7-4ba36af55a15" containerName="neutron-httpd" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670447 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e691b557-a141-44b1-a2c7-4ba36af55a15" containerName="neutron-httpd" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670460 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285" containerName="probe" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670468 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285" containerName="probe" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670482 4981 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="rsync" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670490 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="rsync" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670502 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48fdca2c-4513-4ee6-ad1b-bf69891f5580" containerName="cinder-api-log" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670510 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="48fdca2c-4513-4ee6-ad1b-bf69891f5580" containerName="cinder-api-log" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670521 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f928877c-eaff-4ab4-ae3b-ba6ed721642c" containerName="setup-container" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670531 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f928877c-eaff-4ab4-ae3b-ba6ed721642c" containerName="setup-container" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670543 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" containerName="ovs-vswitchd" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670551 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" containerName="ovs-vswitchd" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670561 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="container-server" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670569 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="container-server" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670581 4981 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="object-server" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670589 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="object-server" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670597 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b63f8c5e-ff68-4a07-a2a5-5c3290e21669" containerName="memcached" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670605 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b63f8c5e-ff68-4a07-a2a5-5c3290e21669" containerName="memcached" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670621 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="container-replicator" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670629 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="container-replicator" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670641 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" containerName="setup-container" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670650 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" containerName="setup-container" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670660 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48fdca2c-4513-4ee6-ad1b-bf69891f5580" containerName="cinder-api" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670668 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="48fdca2c-4513-4ee6-ad1b-bf69891f5580" containerName="cinder-api" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670679 4981 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2" containerName="placement-log" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670687 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2" containerName="placement-log" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670697 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" containerName="rabbitmq" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670705 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" containerName="rabbitmq" Feb 27 19:21:19 crc kubenswrapper[4981]: E0227 19:21:19.670715 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caff730d-9210-4de9-b0f1-997e6f5f16c3" containerName="nova-cell1-conductor-conductor" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670724 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="caff730d-9210-4de9-b0f1-997e6f5f16c3" containerName="nova-cell1-conductor-conductor" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670907 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="object-updater" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670925 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d200585-c61d-43f8-a17e-54f695df7dbe" containerName="proxy-httpd" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670934 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="e691b557-a141-44b1-a2c7-4ba36af55a15" containerName="neutron-httpd" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670942 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b5819ab-18f7-4885-a4b9-a6a3401903a1" containerName="barbican-worker" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670955 4981 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="a912cdfa-b0ce-4ed4-909d-9d1af2a5a879" containerName="barbican-api" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670964 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aa05f73-e7d2-440b-ab1f-780f23c26272" containerName="glance-httpd" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670974 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285" containerName="probe" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670983 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2" containerName="placement-api" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.670994 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c7f2b23-f800-4970-b530-aac7387e0936" containerName="nova-metadata-log" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671007 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="object-auditor" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671021 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="89937a2b-e16c-4964-a540-5a2f8fe812b7" containerName="kube-state-metrics" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671032 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="account-replicator" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671045 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b5819ab-18f7-4885-a4b9-a6a3401903a1" containerName="barbican-worker-log" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671091 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="e691b557-a141-44b1-a2c7-4ba36af55a15" containerName="neutron-api" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671104 4981 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="0d200585-c61d-43f8-a17e-54f695df7dbe" containerName="sg-core" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671115 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa3914e-426b-4791-8199-a7630729baf0" containerName="nova-api-log" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671130 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="087da308-30ee-4a17-945a-844baf0cf4b4" containerName="keystone-api" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671141 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="account-server" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671152 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="container-updater" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671163 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="caff730d-9210-4de9-b0f1-997e6f5f16c3" containerName="nova-cell1-conductor-conductor" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671179 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="account-reaper" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671191 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="object-expirer" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671199 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1bafd9d-a283-406e-900b-3c5d1aae55fe" containerName="glance-httpd" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671213 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aa05f73-e7d2-440b-ab1f-780f23c26272" containerName="glance-log" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671227 4981 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="swift-recon-cron" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671239 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d200585-c61d-43f8-a17e-54f695df7dbe" containerName="ceilometer-central-agent" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671252 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="d83a972b-9d9d-407c-a714-821900bc148e" containerName="nova-scheduler-scheduler" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671262 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" containerName="ovs-vswitchd" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671271 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f928877c-eaff-4ab4-ae3b-ba6ed721642c" containerName="rabbitmq" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671281 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="container-server" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671294 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="b63f8c5e-ff68-4a07-a2a5-5c3290e21669" containerName="memcached" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671306 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="container-auditor" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671318 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="918ffa1d-14dc-4215-ad79-e545616bcfc5" containerName="galera" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671329 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="container-replicator" Feb 27 19:21:19 crc 
kubenswrapper[4981]: I0227 19:21:19.671339 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="account-auditor" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671350 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="a912cdfa-b0ce-4ed4-909d-9d1af2a5a879" containerName="barbican-api-log" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671359 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c7f2b23-f800-4970-b530-aac7387e0936" containerName="nova-metadata-metadata" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671373 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="object-replicator" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671385 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d200585-c61d-43f8-a17e-54f695df7dbe" containerName="ceilometer-notification-agent" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671394 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa3914e-426b-4791-8199-a7630729baf0" containerName="nova-api-api" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671407 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="991e04a2-e14a-4987-a7d8-b7f5db5cb8e3" containerName="rabbitmq" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671415 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="object-server" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671424 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1bafd9d-a283-406e-900b-3c5d1aae55fe" containerName="glance-log" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671433 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="48fdca2c-4513-4ee6-ad1b-bf69891f5580" 
containerName="cinder-api-log" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671445 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1d85462-e999-48fc-8c36-ce8bbe60ed3d" containerName="ovsdb-server" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671456 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a4d1a2a-b7c3-4f40-9f3a-d0e552bd1285" containerName="cinder-scheduler" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671466 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4ec5ec3-4a83-4c2a-adde-600a759fcec2" containerName="nova-cell0-conductor-conductor" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671479 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c5bb1a-80fb-459f-acb9-e3751c60f684" containerName="rsync" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671489 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a76b081-6659-4ff2-a6a1-b0a8b84fb3f2" containerName="placement-log" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.671498 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="48fdca2c-4513-4ee6-ad1b-bf69891f5580" containerName="cinder-api" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.673471 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-848hk" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.688694 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-848hk"] Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.774797 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaddbbf2-2a01-4a51-b80d-13c76453aab8-catalog-content\") pod \"community-operators-848hk\" (UID: \"aaddbbf2-2a01-4a51-b80d-13c76453aab8\") " pod="openshift-marketplace/community-operators-848hk" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.775248 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9kmg\" (UniqueName: \"kubernetes.io/projected/aaddbbf2-2a01-4a51-b80d-13c76453aab8-kube-api-access-x9kmg\") pod \"community-operators-848hk\" (UID: \"aaddbbf2-2a01-4a51-b80d-13c76453aab8\") " pod="openshift-marketplace/community-operators-848hk" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.775394 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaddbbf2-2a01-4a51-b80d-13c76453aab8-utilities\") pod \"community-operators-848hk\" (UID: \"aaddbbf2-2a01-4a51-b80d-13c76453aab8\") " pod="openshift-marketplace/community-operators-848hk" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.876858 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaddbbf2-2a01-4a51-b80d-13c76453aab8-utilities\") pod \"community-operators-848hk\" (UID: \"aaddbbf2-2a01-4a51-b80d-13c76453aab8\") " pod="openshift-marketplace/community-operators-848hk" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.876925 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaddbbf2-2a01-4a51-b80d-13c76453aab8-catalog-content\") pod \"community-operators-848hk\" (UID: \"aaddbbf2-2a01-4a51-b80d-13c76453aab8\") " pod="openshift-marketplace/community-operators-848hk" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.877005 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9kmg\" (UniqueName: \"kubernetes.io/projected/aaddbbf2-2a01-4a51-b80d-13c76453aab8-kube-api-access-x9kmg\") pod \"community-operators-848hk\" (UID: \"aaddbbf2-2a01-4a51-b80d-13c76453aab8\") " pod="openshift-marketplace/community-operators-848hk" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.877480 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaddbbf2-2a01-4a51-b80d-13c76453aab8-catalog-content\") pod \"community-operators-848hk\" (UID: \"aaddbbf2-2a01-4a51-b80d-13c76453aab8\") " pod="openshift-marketplace/community-operators-848hk" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.877708 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaddbbf2-2a01-4a51-b80d-13c76453aab8-utilities\") pod \"community-operators-848hk\" (UID: \"aaddbbf2-2a01-4a51-b80d-13c76453aab8\") " pod="openshift-marketplace/community-operators-848hk" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.910783 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9kmg\" (UniqueName: \"kubernetes.io/projected/aaddbbf2-2a01-4a51-b80d-13c76453aab8-kube-api-access-x9kmg\") pod \"community-operators-848hk\" (UID: \"aaddbbf2-2a01-4a51-b80d-13c76453aab8\") " pod="openshift-marketplace/community-operators-848hk" Feb 27 19:21:19 crc kubenswrapper[4981]: I0227 19:21:19.994340 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-848hk" Feb 27 19:21:20 crc kubenswrapper[4981]: I0227 19:21:20.268392 4981 scope.go:117] "RemoveContainer" containerID="43954ed89f1a5a50cab1e0763369b500e72de40de24f9ee7294e4748d6f94e76" Feb 27 19:21:20 crc kubenswrapper[4981]: I0227 19:21:20.327257 4981 scope.go:117] "RemoveContainer" containerID="4e3e729a2f7c99f11756596ccfffce4dcd67e9c1cabd75f3b556e598bd9ab27b" Feb 27 19:21:20 crc kubenswrapper[4981]: I0227 19:21:20.366537 4981 scope.go:117] "RemoveContainer" containerID="52292a3a2c1906d86541be54cde391b3e2ea44195dfca89da85c7b92391c2d63" Feb 27 19:21:20 crc kubenswrapper[4981]: I0227 19:21:20.394503 4981 scope.go:117] "RemoveContainer" containerID="94f7cad0d48ab4cdb999663cdeab7da040c74451f9b64c26617c577f369d2053" Feb 27 19:21:20 crc kubenswrapper[4981]: I0227 19:21:20.416955 4981 scope.go:117] "RemoveContainer" containerID="364f04cea6048434f5f847011578a18118dcee127078a31d7bd0e8cabfdf8b4b" Feb 27 19:21:20 crc kubenswrapper[4981]: I0227 19:21:20.444291 4981 scope.go:117] "RemoveContainer" containerID="531958d8e14e7d34b3f90789e5e2637a638062c6d362ab894c4bc534b9ce119c" Feb 27 19:21:20 crc kubenswrapper[4981]: I0227 19:21:20.463545 4981 scope.go:117] "RemoveContainer" containerID="5332cf3f5b64f2f3b52b937ec64b6dff04cfa7fdd73b7dc4e33fe5d2c008f675" Feb 27 19:21:20 crc kubenswrapper[4981]: I0227 19:21:20.484320 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-848hk"] Feb 27 19:21:20 crc kubenswrapper[4981]: I0227 19:21:20.502747 4981 scope.go:117] "RemoveContainer" containerID="d378fa26bc8f0c5f0f946f4dfecf68788a807fe6b3c792500d882ea0dd773eb9" Feb 27 19:21:20 crc kubenswrapper[4981]: I0227 19:21:20.534476 4981 scope.go:117] "RemoveContainer" containerID="cce6d6c81c960fa6f31644866b722a77872ba205b67bf733c80144cc6d6e3dfc" Feb 27 19:21:20 crc kubenswrapper[4981]: I0227 19:21:20.557005 4981 scope.go:117] "RemoveContainer" 
containerID="b17d1c158ee9a02d955c961d36f3778f1d0ce99cc8890e879aaabb3483dbe8a8" Feb 27 19:21:20 crc kubenswrapper[4981]: I0227 19:21:20.575719 4981 scope.go:117] "RemoveContainer" containerID="dd5a437e89f1984f0479ec8286bb8061f357082ffc23b35bd6f382a0898da54c" Feb 27 19:21:20 crc kubenswrapper[4981]: I0227 19:21:20.600654 4981 scope.go:117] "RemoveContainer" containerID="4fed5c77f575f747ac150d5541be3f78f7462e24e36ffd8348183eb7cc147164" Feb 27 19:21:20 crc kubenswrapper[4981]: I0227 19:21:20.634489 4981 scope.go:117] "RemoveContainer" containerID="5bb3f4702c947998436733cc6cc2c2d7567ac9f3091ff61fd1bc793151cba664" Feb 27 19:21:20 crc kubenswrapper[4981]: I0227 19:21:20.682714 4981 scope.go:117] "RemoveContainer" containerID="f332c9c4f6940a8270735e0201344c8a12b47b999c9c7ed16d5e9cbe6c3bf7c5" Feb 27 19:21:20 crc kubenswrapper[4981]: I0227 19:21:20.709947 4981 scope.go:117] "RemoveContainer" containerID="ba25fc01cd3ba9d204a8832a68f3c221b9bf12a26dc747c0f8450c4251fb2747" Feb 27 19:21:20 crc kubenswrapper[4981]: I0227 19:21:20.784454 4981 scope.go:117] "RemoveContainer" containerID="a3c5cca3e8149e88be34c021f45707aad26587070262895945e7ea19c52e2d2b" Feb 27 19:21:20 crc kubenswrapper[4981]: I0227 19:21:20.835489 4981 scope.go:117] "RemoveContainer" containerID="60ea99e8d510f6df63673bce3568154a3b9731d5509747db3b548c69fc6d391a" Feb 27 19:21:20 crc kubenswrapper[4981]: I0227 19:21:20.857496 4981 scope.go:117] "RemoveContainer" containerID="31fade82185f1e83c1d90a9aa653996bc4068bb402a2e3fde43cb5775094559e" Feb 27 19:21:21 crc kubenswrapper[4981]: I0227 19:21:21.090333 4981 generic.go:334] "Generic (PLEG): container finished" podID="aaddbbf2-2a01-4a51-b80d-13c76453aab8" containerID="1a986798bb817085fb2603cf9d063b8bc2dd2d22578690056738f5165272e692" exitCode=0 Feb 27 19:21:21 crc kubenswrapper[4981]: I0227 19:21:21.090385 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-848hk" 
event={"ID":"aaddbbf2-2a01-4a51-b80d-13c76453aab8","Type":"ContainerDied","Data":"1a986798bb817085fb2603cf9d063b8bc2dd2d22578690056738f5165272e692"} Feb 27 19:21:21 crc kubenswrapper[4981]: I0227 19:21:21.090433 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-848hk" event={"ID":"aaddbbf2-2a01-4a51-b80d-13c76453aab8","Type":"ContainerStarted","Data":"ab1014aba7e4274aad15778f8aab8c18d1b6f8d08ea6efef175bbee24687d9df"} Feb 27 19:21:22 crc kubenswrapper[4981]: I0227 19:21:22.133615 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-848hk" event={"ID":"aaddbbf2-2a01-4a51-b80d-13c76453aab8","Type":"ContainerStarted","Data":"f005b09b33636d9f85d3ccb7f1544c2654dee9faf7aa5701eb1d75a27fccf689"} Feb 27 19:21:23 crc kubenswrapper[4981]: I0227 19:21:23.144452 4981 generic.go:334] "Generic (PLEG): container finished" podID="aaddbbf2-2a01-4a51-b80d-13c76453aab8" containerID="f005b09b33636d9f85d3ccb7f1544c2654dee9faf7aa5701eb1d75a27fccf689" exitCode=0 Feb 27 19:21:23 crc kubenswrapper[4981]: I0227 19:21:23.144503 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-848hk" event={"ID":"aaddbbf2-2a01-4a51-b80d-13c76453aab8","Type":"ContainerDied","Data":"f005b09b33636d9f85d3ccb7f1544c2654dee9faf7aa5701eb1d75a27fccf689"} Feb 27 19:21:24 crc kubenswrapper[4981]: I0227 19:21:24.154790 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-848hk" event={"ID":"aaddbbf2-2a01-4a51-b80d-13c76453aab8","Type":"ContainerStarted","Data":"68be9f11468ae323001a1e7f76674a8136279fda8d6ff126e4ca523f9a4527aa"} Feb 27 19:21:24 crc kubenswrapper[4981]: I0227 19:21:24.173226 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-848hk" podStartSLOduration=2.767443297 podStartE2EDuration="5.173204514s" podCreationTimestamp="2026-02-27 19:21:19 
+0000 UTC" firstStartedPulling="2026-02-27 19:21:21.115972638 +0000 UTC m=+2180.594753798" lastFinishedPulling="2026-02-27 19:21:23.521733855 +0000 UTC m=+2183.000515015" observedRunningTime="2026-02-27 19:21:24.171376978 +0000 UTC m=+2183.650158138" watchObservedRunningTime="2026-02-27 19:21:24.173204514 +0000 UTC m=+2183.651985674" Feb 27 19:21:29 crc kubenswrapper[4981]: I0227 19:21:29.995931 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-848hk" Feb 27 19:21:29 crc kubenswrapper[4981]: I0227 19:21:29.996631 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-848hk" Feb 27 19:21:30 crc kubenswrapper[4981]: I0227 19:21:30.049233 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-848hk" Feb 27 19:21:30 crc kubenswrapper[4981]: I0227 19:21:30.242664 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-848hk" Feb 27 19:21:30 crc kubenswrapper[4981]: I0227 19:21:30.293701 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-848hk"] Feb 27 19:21:32 crc kubenswrapper[4981]: I0227 19:21:32.213175 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-848hk" podUID="aaddbbf2-2a01-4a51-b80d-13c76453aab8" containerName="registry-server" containerID="cri-o://68be9f11468ae323001a1e7f76674a8136279fda8d6ff126e4ca523f9a4527aa" gracePeriod=2 Feb 27 19:21:34 crc kubenswrapper[4981]: I0227 19:21:34.235665 4981 generic.go:334] "Generic (PLEG): container finished" podID="aaddbbf2-2a01-4a51-b80d-13c76453aab8" containerID="68be9f11468ae323001a1e7f76674a8136279fda8d6ff126e4ca523f9a4527aa" exitCode=0 Feb 27 19:21:34 crc kubenswrapper[4981]: I0227 19:21:34.235761 4981 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-848hk" event={"ID":"aaddbbf2-2a01-4a51-b80d-13c76453aab8","Type":"ContainerDied","Data":"68be9f11468ae323001a1e7f76674a8136279fda8d6ff126e4ca523f9a4527aa"} Feb 27 19:21:34 crc kubenswrapper[4981]: I0227 19:21:34.599373 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-848hk" Feb 27 19:21:34 crc kubenswrapper[4981]: I0227 19:21:34.701936 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9kmg\" (UniqueName: \"kubernetes.io/projected/aaddbbf2-2a01-4a51-b80d-13c76453aab8-kube-api-access-x9kmg\") pod \"aaddbbf2-2a01-4a51-b80d-13c76453aab8\" (UID: \"aaddbbf2-2a01-4a51-b80d-13c76453aab8\") " Feb 27 19:21:34 crc kubenswrapper[4981]: I0227 19:21:34.702172 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaddbbf2-2a01-4a51-b80d-13c76453aab8-catalog-content\") pod \"aaddbbf2-2a01-4a51-b80d-13c76453aab8\" (UID: \"aaddbbf2-2a01-4a51-b80d-13c76453aab8\") " Feb 27 19:21:34 crc kubenswrapper[4981]: I0227 19:21:34.702328 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaddbbf2-2a01-4a51-b80d-13c76453aab8-utilities\") pod \"aaddbbf2-2a01-4a51-b80d-13c76453aab8\" (UID: \"aaddbbf2-2a01-4a51-b80d-13c76453aab8\") " Feb 27 19:21:34 crc kubenswrapper[4981]: I0227 19:21:34.703419 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaddbbf2-2a01-4a51-b80d-13c76453aab8-utilities" (OuterVolumeSpecName: "utilities") pod "aaddbbf2-2a01-4a51-b80d-13c76453aab8" (UID: "aaddbbf2-2a01-4a51-b80d-13c76453aab8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:21:34 crc kubenswrapper[4981]: I0227 19:21:34.710310 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaddbbf2-2a01-4a51-b80d-13c76453aab8-kube-api-access-x9kmg" (OuterVolumeSpecName: "kube-api-access-x9kmg") pod "aaddbbf2-2a01-4a51-b80d-13c76453aab8" (UID: "aaddbbf2-2a01-4a51-b80d-13c76453aab8"). InnerVolumeSpecName "kube-api-access-x9kmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:21:34 crc kubenswrapper[4981]: I0227 19:21:34.763710 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaddbbf2-2a01-4a51-b80d-13c76453aab8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aaddbbf2-2a01-4a51-b80d-13c76453aab8" (UID: "aaddbbf2-2a01-4a51-b80d-13c76453aab8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:21:34 crc kubenswrapper[4981]: I0227 19:21:34.803905 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9kmg\" (UniqueName: \"kubernetes.io/projected/aaddbbf2-2a01-4a51-b80d-13c76453aab8-kube-api-access-x9kmg\") on node \"crc\" DevicePath \"\"" Feb 27 19:21:34 crc kubenswrapper[4981]: I0227 19:21:34.803941 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaddbbf2-2a01-4a51-b80d-13c76453aab8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:21:34 crc kubenswrapper[4981]: I0227 19:21:34.803954 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaddbbf2-2a01-4a51-b80d-13c76453aab8-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:21:35 crc kubenswrapper[4981]: I0227 19:21:35.250803 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-848hk" 
event={"ID":"aaddbbf2-2a01-4a51-b80d-13c76453aab8","Type":"ContainerDied","Data":"ab1014aba7e4274aad15778f8aab8c18d1b6f8d08ea6efef175bbee24687d9df"} Feb 27 19:21:35 crc kubenswrapper[4981]: I0227 19:21:35.250855 4981 scope.go:117] "RemoveContainer" containerID="68be9f11468ae323001a1e7f76674a8136279fda8d6ff126e4ca523f9a4527aa" Feb 27 19:21:35 crc kubenswrapper[4981]: I0227 19:21:35.250854 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-848hk" Feb 27 19:21:35 crc kubenswrapper[4981]: I0227 19:21:35.274399 4981 scope.go:117] "RemoveContainer" containerID="f005b09b33636d9f85d3ccb7f1544c2654dee9faf7aa5701eb1d75a27fccf689" Feb 27 19:21:35 crc kubenswrapper[4981]: I0227 19:21:35.296187 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-848hk"] Feb 27 19:21:35 crc kubenswrapper[4981]: I0227 19:21:35.301564 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-848hk"] Feb 27 19:21:35 crc kubenswrapper[4981]: I0227 19:21:35.306935 4981 scope.go:117] "RemoveContainer" containerID="1a986798bb817085fb2603cf9d063b8bc2dd2d22578690056738f5165272e692" Feb 27 19:21:35 crc kubenswrapper[4981]: I0227 19:21:35.640794 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaddbbf2-2a01-4a51-b80d-13c76453aab8" path="/var/lib/kubelet/pods/aaddbbf2-2a01-4a51-b80d-13c76453aab8/volumes" Feb 27 19:22:00 crc kubenswrapper[4981]: I0227 19:22:00.155816 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537002-mc5hq"] Feb 27 19:22:00 crc kubenswrapper[4981]: E0227 19:22:00.156718 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaddbbf2-2a01-4a51-b80d-13c76453aab8" containerName="extract-utilities" Feb 27 19:22:00 crc kubenswrapper[4981]: I0227 19:22:00.156732 4981 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="aaddbbf2-2a01-4a51-b80d-13c76453aab8" containerName="extract-utilities" Feb 27 19:22:00 crc kubenswrapper[4981]: E0227 19:22:00.156765 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaddbbf2-2a01-4a51-b80d-13c76453aab8" containerName="extract-content" Feb 27 19:22:00 crc kubenswrapper[4981]: I0227 19:22:00.156773 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaddbbf2-2a01-4a51-b80d-13c76453aab8" containerName="extract-content" Feb 27 19:22:00 crc kubenswrapper[4981]: E0227 19:22:00.156786 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaddbbf2-2a01-4a51-b80d-13c76453aab8" containerName="registry-server" Feb 27 19:22:00 crc kubenswrapper[4981]: I0227 19:22:00.156793 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaddbbf2-2a01-4a51-b80d-13c76453aab8" containerName="registry-server" Feb 27 19:22:00 crc kubenswrapper[4981]: I0227 19:22:00.156943 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaddbbf2-2a01-4a51-b80d-13c76453aab8" containerName="registry-server" Feb 27 19:22:00 crc kubenswrapper[4981]: I0227 19:22:00.157453 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537002-mc5hq" Feb 27 19:22:00 crc kubenswrapper[4981]: I0227 19:22:00.160160 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:22:00 crc kubenswrapper[4981]: I0227 19:22:00.160955 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:22:00 crc kubenswrapper[4981]: I0227 19:22:00.161141 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf" Feb 27 19:22:00 crc kubenswrapper[4981]: I0227 19:22:00.166025 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537002-mc5hq"] Feb 27 19:22:00 crc kubenswrapper[4981]: I0227 19:22:00.307917 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htkwl\" (UniqueName: \"kubernetes.io/projected/4992e128-f31d-4501-9e9c-6967330dcaf1-kube-api-access-htkwl\") pod \"auto-csr-approver-29537002-mc5hq\" (UID: \"4992e128-f31d-4501-9e9c-6967330dcaf1\") " pod="openshift-infra/auto-csr-approver-29537002-mc5hq" Feb 27 19:22:00 crc kubenswrapper[4981]: I0227 19:22:00.409284 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htkwl\" (UniqueName: \"kubernetes.io/projected/4992e128-f31d-4501-9e9c-6967330dcaf1-kube-api-access-htkwl\") pod \"auto-csr-approver-29537002-mc5hq\" (UID: \"4992e128-f31d-4501-9e9c-6967330dcaf1\") " pod="openshift-infra/auto-csr-approver-29537002-mc5hq" Feb 27 19:22:00 crc kubenswrapper[4981]: I0227 19:22:00.432023 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htkwl\" (UniqueName: \"kubernetes.io/projected/4992e128-f31d-4501-9e9c-6967330dcaf1-kube-api-access-htkwl\") pod \"auto-csr-approver-29537002-mc5hq\" (UID: \"4992e128-f31d-4501-9e9c-6967330dcaf1\") " 
pod="openshift-infra/auto-csr-approver-29537002-mc5hq" Feb 27 19:22:00 crc kubenswrapper[4981]: I0227 19:22:00.474415 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537002-mc5hq" Feb 27 19:22:00 crc kubenswrapper[4981]: I0227 19:22:00.881000 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537002-mc5hq"] Feb 27 19:22:01 crc kubenswrapper[4981]: I0227 19:22:01.457777 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537002-mc5hq" event={"ID":"4992e128-f31d-4501-9e9c-6967330dcaf1","Type":"ContainerStarted","Data":"64b3430c20c6c09b7915ea192b3c94b9968f21aa17cb5c82f4a375ac762a1679"} Feb 27 19:22:03 crc kubenswrapper[4981]: I0227 19:22:03.473764 4981 generic.go:334] "Generic (PLEG): container finished" podID="4992e128-f31d-4501-9e9c-6967330dcaf1" containerID="e48ee52687794e937811e73f167a153be43d7d4660895d5a9bbb875ae8d14c1a" exitCode=0 Feb 27 19:22:03 crc kubenswrapper[4981]: I0227 19:22:03.473812 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537002-mc5hq" event={"ID":"4992e128-f31d-4501-9e9c-6967330dcaf1","Type":"ContainerDied","Data":"e48ee52687794e937811e73f167a153be43d7d4660895d5a9bbb875ae8d14c1a"} Feb 27 19:22:04 crc kubenswrapper[4981]: I0227 19:22:04.889728 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537002-mc5hq" Feb 27 19:22:05 crc kubenswrapper[4981]: I0227 19:22:05.088725 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htkwl\" (UniqueName: \"kubernetes.io/projected/4992e128-f31d-4501-9e9c-6967330dcaf1-kube-api-access-htkwl\") pod \"4992e128-f31d-4501-9e9c-6967330dcaf1\" (UID: \"4992e128-f31d-4501-9e9c-6967330dcaf1\") " Feb 27 19:22:05 crc kubenswrapper[4981]: I0227 19:22:05.093723 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4992e128-f31d-4501-9e9c-6967330dcaf1-kube-api-access-htkwl" (OuterVolumeSpecName: "kube-api-access-htkwl") pod "4992e128-f31d-4501-9e9c-6967330dcaf1" (UID: "4992e128-f31d-4501-9e9c-6967330dcaf1"). InnerVolumeSpecName "kube-api-access-htkwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:22:05 crc kubenswrapper[4981]: I0227 19:22:05.190498 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htkwl\" (UniqueName: \"kubernetes.io/projected/4992e128-f31d-4501-9e9c-6967330dcaf1-kube-api-access-htkwl\") on node \"crc\" DevicePath \"\"" Feb 27 19:22:05 crc kubenswrapper[4981]: I0227 19:22:05.490906 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537002-mc5hq" event={"ID":"4992e128-f31d-4501-9e9c-6967330dcaf1","Type":"ContainerDied","Data":"64b3430c20c6c09b7915ea192b3c94b9968f21aa17cb5c82f4a375ac762a1679"} Feb 27 19:22:05 crc kubenswrapper[4981]: I0227 19:22:05.490948 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64b3430c20c6c09b7915ea192b3c94b9968f21aa17cb5c82f4a375ac762a1679" Feb 27 19:22:05 crc kubenswrapper[4981]: I0227 19:22:05.490959 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537002-mc5hq" Feb 27 19:22:05 crc kubenswrapper[4981]: I0227 19:22:05.960462 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536996-zdg5k"] Feb 27 19:22:05 crc kubenswrapper[4981]: I0227 19:22:05.976089 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536996-zdg5k"] Feb 27 19:22:07 crc kubenswrapper[4981]: I0227 19:22:07.637247 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb159f40-08a1-4c27-9aa5-479f30ee1974" path="/var/lib/kubelet/pods/cb159f40-08a1-4c27-9aa5-479f30ee1974/volumes" Feb 27 19:22:21 crc kubenswrapper[4981]: I0227 19:22:21.636885 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zhw8t"] Feb 27 19:22:21 crc kubenswrapper[4981]: E0227 19:22:21.637761 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4992e128-f31d-4501-9e9c-6967330dcaf1" containerName="oc" Feb 27 19:22:21 crc kubenswrapper[4981]: I0227 19:22:21.637778 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="4992e128-f31d-4501-9e9c-6967330dcaf1" containerName="oc" Feb 27 19:22:21 crc kubenswrapper[4981]: I0227 19:22:21.637970 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="4992e128-f31d-4501-9e9c-6967330dcaf1" containerName="oc" Feb 27 19:22:21 crc kubenswrapper[4981]: I0227 19:22:21.639244 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zhw8t" Feb 27 19:22:21 crc kubenswrapper[4981]: I0227 19:22:21.654616 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zhw8t"] Feb 27 19:22:21 crc kubenswrapper[4981]: I0227 19:22:21.690701 4981 scope.go:117] "RemoveContainer" containerID="d541bf917ce5a598e79b6aa38ac4d242ca09683b99f1ccd4a0cff4489d84d2f0" Feb 27 19:22:21 crc kubenswrapper[4981]: I0227 19:22:21.728584 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c04435bc-9195-4209-be9c-1219659aeca5-catalog-content\") pod \"redhat-marketplace-zhw8t\" (UID: \"c04435bc-9195-4209-be9c-1219659aeca5\") " pod="openshift-marketplace/redhat-marketplace-zhw8t" Feb 27 19:22:21 crc kubenswrapper[4981]: I0227 19:22:21.728661 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c04435bc-9195-4209-be9c-1219659aeca5-utilities\") pod \"redhat-marketplace-zhw8t\" (UID: \"c04435bc-9195-4209-be9c-1219659aeca5\") " pod="openshift-marketplace/redhat-marketplace-zhw8t" Feb 27 19:22:21 crc kubenswrapper[4981]: I0227 19:22:21.728782 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc95h\" (UniqueName: \"kubernetes.io/projected/c04435bc-9195-4209-be9c-1219659aeca5-kube-api-access-sc95h\") pod \"redhat-marketplace-zhw8t\" (UID: \"c04435bc-9195-4209-be9c-1219659aeca5\") " pod="openshift-marketplace/redhat-marketplace-zhw8t" Feb 27 19:22:21 crc kubenswrapper[4981]: I0227 19:22:21.830307 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c04435bc-9195-4209-be9c-1219659aeca5-catalog-content\") pod \"redhat-marketplace-zhw8t\" (UID: 
\"c04435bc-9195-4209-be9c-1219659aeca5\") " pod="openshift-marketplace/redhat-marketplace-zhw8t" Feb 27 19:22:21 crc kubenswrapper[4981]: I0227 19:22:21.830387 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c04435bc-9195-4209-be9c-1219659aeca5-utilities\") pod \"redhat-marketplace-zhw8t\" (UID: \"c04435bc-9195-4209-be9c-1219659aeca5\") " pod="openshift-marketplace/redhat-marketplace-zhw8t" Feb 27 19:22:21 crc kubenswrapper[4981]: I0227 19:22:21.830429 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc95h\" (UniqueName: \"kubernetes.io/projected/c04435bc-9195-4209-be9c-1219659aeca5-kube-api-access-sc95h\") pod \"redhat-marketplace-zhw8t\" (UID: \"c04435bc-9195-4209-be9c-1219659aeca5\") " pod="openshift-marketplace/redhat-marketplace-zhw8t" Feb 27 19:22:21 crc kubenswrapper[4981]: I0227 19:22:21.830923 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c04435bc-9195-4209-be9c-1219659aeca5-catalog-content\") pod \"redhat-marketplace-zhw8t\" (UID: \"c04435bc-9195-4209-be9c-1219659aeca5\") " pod="openshift-marketplace/redhat-marketplace-zhw8t" Feb 27 19:22:21 crc kubenswrapper[4981]: I0227 19:22:21.830968 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c04435bc-9195-4209-be9c-1219659aeca5-utilities\") pod \"redhat-marketplace-zhw8t\" (UID: \"c04435bc-9195-4209-be9c-1219659aeca5\") " pod="openshift-marketplace/redhat-marketplace-zhw8t" Feb 27 19:22:21 crc kubenswrapper[4981]: I0227 19:22:21.851193 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc95h\" (UniqueName: \"kubernetes.io/projected/c04435bc-9195-4209-be9c-1219659aeca5-kube-api-access-sc95h\") pod \"redhat-marketplace-zhw8t\" (UID: \"c04435bc-9195-4209-be9c-1219659aeca5\") " 
pod="openshift-marketplace/redhat-marketplace-zhw8t" Feb 27 19:22:21 crc kubenswrapper[4981]: I0227 19:22:21.970299 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zhw8t" Feb 27 19:22:22 crc kubenswrapper[4981]: I0227 19:22:22.401696 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zhw8t"] Feb 27 19:22:22 crc kubenswrapper[4981]: I0227 19:22:22.619609 4981 generic.go:334] "Generic (PLEG): container finished" podID="c04435bc-9195-4209-be9c-1219659aeca5" containerID="7decb2727cfc3e3311f23ac50e8bccc29b488478dc26ac34c4ce54f5798e9f54" exitCode=0 Feb 27 19:22:22 crc kubenswrapper[4981]: I0227 19:22:22.619664 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zhw8t" event={"ID":"c04435bc-9195-4209-be9c-1219659aeca5","Type":"ContainerDied","Data":"7decb2727cfc3e3311f23ac50e8bccc29b488478dc26ac34c4ce54f5798e9f54"} Feb 27 19:22:22 crc kubenswrapper[4981]: I0227 19:22:22.619691 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zhw8t" event={"ID":"c04435bc-9195-4209-be9c-1219659aeca5","Type":"ContainerStarted","Data":"8f10d57dd2984767f221a17f75282271a94f075c85e3a58957bef5f30570bd19"} Feb 27 19:22:24 crc kubenswrapper[4981]: I0227 19:22:24.637735 4981 generic.go:334] "Generic (PLEG): container finished" podID="c04435bc-9195-4209-be9c-1219659aeca5" containerID="5281b43b811c2cd864f8ef8ad73db90eacd390d42e278387d61a988a657ebf90" exitCode=0 Feb 27 19:22:24 crc kubenswrapper[4981]: I0227 19:22:24.637785 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zhw8t" event={"ID":"c04435bc-9195-4209-be9c-1219659aeca5","Type":"ContainerDied","Data":"5281b43b811c2cd864f8ef8ad73db90eacd390d42e278387d61a988a657ebf90"} Feb 27 19:22:25 crc kubenswrapper[4981]: I0227 19:22:25.648028 4981 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-zhw8t" event={"ID":"c04435bc-9195-4209-be9c-1219659aeca5","Type":"ContainerStarted","Data":"61e179cfcbd9b6ef562efc6aec59d4a7ce708d6aec0ee0a03c9427eaa2c6fbdc"} Feb 27 19:22:25 crc kubenswrapper[4981]: I0227 19:22:25.665719 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zhw8t" podStartSLOduration=2.2666879140000002 podStartE2EDuration="4.665696779s" podCreationTimestamp="2026-02-27 19:22:21 +0000 UTC" firstStartedPulling="2026-02-27 19:22:22.620916454 +0000 UTC m=+2242.099697614" lastFinishedPulling="2026-02-27 19:22:25.019925319 +0000 UTC m=+2244.498706479" observedRunningTime="2026-02-27 19:22:25.663393948 +0000 UTC m=+2245.142175108" watchObservedRunningTime="2026-02-27 19:22:25.665696779 +0000 UTC m=+2245.144477939" Feb 27 19:22:31 crc kubenswrapper[4981]: I0227 19:22:31.971306 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zhw8t" Feb 27 19:22:31 crc kubenswrapper[4981]: I0227 19:22:31.977332 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zhw8t" Feb 27 19:22:32 crc kubenswrapper[4981]: I0227 19:22:32.028139 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zhw8t" Feb 27 19:22:32 crc kubenswrapper[4981]: I0227 19:22:32.747945 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zhw8t" Feb 27 19:22:32 crc kubenswrapper[4981]: I0227 19:22:32.805911 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zhw8t"] Feb 27 19:22:34 crc kubenswrapper[4981]: I0227 19:22:34.721114 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zhw8t" 
podUID="c04435bc-9195-4209-be9c-1219659aeca5" containerName="registry-server" containerID="cri-o://61e179cfcbd9b6ef562efc6aec59d4a7ce708d6aec0ee0a03c9427eaa2c6fbdc" gracePeriod=2 Feb 27 19:22:35 crc kubenswrapper[4981]: I0227 19:22:35.690109 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zhw8t" Feb 27 19:22:35 crc kubenswrapper[4981]: I0227 19:22:35.730965 4981 generic.go:334] "Generic (PLEG): container finished" podID="c04435bc-9195-4209-be9c-1219659aeca5" containerID="61e179cfcbd9b6ef562efc6aec59d4a7ce708d6aec0ee0a03c9427eaa2c6fbdc" exitCode=0 Feb 27 19:22:35 crc kubenswrapper[4981]: I0227 19:22:35.731012 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zhw8t" event={"ID":"c04435bc-9195-4209-be9c-1219659aeca5","Type":"ContainerDied","Data":"61e179cfcbd9b6ef562efc6aec59d4a7ce708d6aec0ee0a03c9427eaa2c6fbdc"} Feb 27 19:22:35 crc kubenswrapper[4981]: I0227 19:22:35.731126 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zhw8t" event={"ID":"c04435bc-9195-4209-be9c-1219659aeca5","Type":"ContainerDied","Data":"8f10d57dd2984767f221a17f75282271a94f075c85e3a58957bef5f30570bd19"} Feb 27 19:22:35 crc kubenswrapper[4981]: I0227 19:22:35.731149 4981 scope.go:117] "RemoveContainer" containerID="61e179cfcbd9b6ef562efc6aec59d4a7ce708d6aec0ee0a03c9427eaa2c6fbdc" Feb 27 19:22:35 crc kubenswrapper[4981]: I0227 19:22:35.732333 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zhw8t" Feb 27 19:22:35 crc kubenswrapper[4981]: I0227 19:22:35.750292 4981 scope.go:117] "RemoveContainer" containerID="5281b43b811c2cd864f8ef8ad73db90eacd390d42e278387d61a988a657ebf90" Feb 27 19:22:35 crc kubenswrapper[4981]: I0227 19:22:35.766269 4981 scope.go:117] "RemoveContainer" containerID="7decb2727cfc3e3311f23ac50e8bccc29b488478dc26ac34c4ce54f5798e9f54" Feb 27 19:22:35 crc kubenswrapper[4981]: I0227 19:22:35.792397 4981 scope.go:117] "RemoveContainer" containerID="61e179cfcbd9b6ef562efc6aec59d4a7ce708d6aec0ee0a03c9427eaa2c6fbdc" Feb 27 19:22:35 crc kubenswrapper[4981]: E0227 19:22:35.792891 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61e179cfcbd9b6ef562efc6aec59d4a7ce708d6aec0ee0a03c9427eaa2c6fbdc\": container with ID starting with 61e179cfcbd9b6ef562efc6aec59d4a7ce708d6aec0ee0a03c9427eaa2c6fbdc not found: ID does not exist" containerID="61e179cfcbd9b6ef562efc6aec59d4a7ce708d6aec0ee0a03c9427eaa2c6fbdc" Feb 27 19:22:35 crc kubenswrapper[4981]: I0227 19:22:35.792983 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61e179cfcbd9b6ef562efc6aec59d4a7ce708d6aec0ee0a03c9427eaa2c6fbdc"} err="failed to get container status \"61e179cfcbd9b6ef562efc6aec59d4a7ce708d6aec0ee0a03c9427eaa2c6fbdc\": rpc error: code = NotFound desc = could not find container \"61e179cfcbd9b6ef562efc6aec59d4a7ce708d6aec0ee0a03c9427eaa2c6fbdc\": container with ID starting with 61e179cfcbd9b6ef562efc6aec59d4a7ce708d6aec0ee0a03c9427eaa2c6fbdc not found: ID does not exist" Feb 27 19:22:35 crc kubenswrapper[4981]: I0227 19:22:35.793046 4981 scope.go:117] "RemoveContainer" containerID="5281b43b811c2cd864f8ef8ad73db90eacd390d42e278387d61a988a657ebf90" Feb 27 19:22:35 crc kubenswrapper[4981]: E0227 19:22:35.793482 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"5281b43b811c2cd864f8ef8ad73db90eacd390d42e278387d61a988a657ebf90\": container with ID starting with 5281b43b811c2cd864f8ef8ad73db90eacd390d42e278387d61a988a657ebf90 not found: ID does not exist" containerID="5281b43b811c2cd864f8ef8ad73db90eacd390d42e278387d61a988a657ebf90" Feb 27 19:22:35 crc kubenswrapper[4981]: I0227 19:22:35.793516 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5281b43b811c2cd864f8ef8ad73db90eacd390d42e278387d61a988a657ebf90"} err="failed to get container status \"5281b43b811c2cd864f8ef8ad73db90eacd390d42e278387d61a988a657ebf90\": rpc error: code = NotFound desc = could not find container \"5281b43b811c2cd864f8ef8ad73db90eacd390d42e278387d61a988a657ebf90\": container with ID starting with 5281b43b811c2cd864f8ef8ad73db90eacd390d42e278387d61a988a657ebf90 not found: ID does not exist" Feb 27 19:22:35 crc kubenswrapper[4981]: I0227 19:22:35.793541 4981 scope.go:117] "RemoveContainer" containerID="7decb2727cfc3e3311f23ac50e8bccc29b488478dc26ac34c4ce54f5798e9f54" Feb 27 19:22:35 crc kubenswrapper[4981]: E0227 19:22:35.793924 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7decb2727cfc3e3311f23ac50e8bccc29b488478dc26ac34c4ce54f5798e9f54\": container with ID starting with 7decb2727cfc3e3311f23ac50e8bccc29b488478dc26ac34c4ce54f5798e9f54 not found: ID does not exist" containerID="7decb2727cfc3e3311f23ac50e8bccc29b488478dc26ac34c4ce54f5798e9f54" Feb 27 19:22:35 crc kubenswrapper[4981]: I0227 19:22:35.793957 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7decb2727cfc3e3311f23ac50e8bccc29b488478dc26ac34c4ce54f5798e9f54"} err="failed to get container status \"7decb2727cfc3e3311f23ac50e8bccc29b488478dc26ac34c4ce54f5798e9f54\": rpc error: code = NotFound desc = could not find container 
\"7decb2727cfc3e3311f23ac50e8bccc29b488478dc26ac34c4ce54f5798e9f54\": container with ID starting with 7decb2727cfc3e3311f23ac50e8bccc29b488478dc26ac34c4ce54f5798e9f54 not found: ID does not exist" Feb 27 19:22:35 crc kubenswrapper[4981]: I0227 19:22:35.837340 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c04435bc-9195-4209-be9c-1219659aeca5-utilities\") pod \"c04435bc-9195-4209-be9c-1219659aeca5\" (UID: \"c04435bc-9195-4209-be9c-1219659aeca5\") " Feb 27 19:22:35 crc kubenswrapper[4981]: I0227 19:22:35.837456 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c04435bc-9195-4209-be9c-1219659aeca5-catalog-content\") pod \"c04435bc-9195-4209-be9c-1219659aeca5\" (UID: \"c04435bc-9195-4209-be9c-1219659aeca5\") " Feb 27 19:22:35 crc kubenswrapper[4981]: I0227 19:22:35.837525 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc95h\" (UniqueName: \"kubernetes.io/projected/c04435bc-9195-4209-be9c-1219659aeca5-kube-api-access-sc95h\") pod \"c04435bc-9195-4209-be9c-1219659aeca5\" (UID: \"c04435bc-9195-4209-be9c-1219659aeca5\") " Feb 27 19:22:35 crc kubenswrapper[4981]: I0227 19:22:35.838283 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c04435bc-9195-4209-be9c-1219659aeca5-utilities" (OuterVolumeSpecName: "utilities") pod "c04435bc-9195-4209-be9c-1219659aeca5" (UID: "c04435bc-9195-4209-be9c-1219659aeca5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:22:35 crc kubenswrapper[4981]: I0227 19:22:35.839221 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c04435bc-9195-4209-be9c-1219659aeca5-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:22:35 crc kubenswrapper[4981]: I0227 19:22:35.842640 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c04435bc-9195-4209-be9c-1219659aeca5-kube-api-access-sc95h" (OuterVolumeSpecName: "kube-api-access-sc95h") pod "c04435bc-9195-4209-be9c-1219659aeca5" (UID: "c04435bc-9195-4209-be9c-1219659aeca5"). InnerVolumeSpecName "kube-api-access-sc95h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:22:35 crc kubenswrapper[4981]: I0227 19:22:35.869697 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c04435bc-9195-4209-be9c-1219659aeca5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c04435bc-9195-4209-be9c-1219659aeca5" (UID: "c04435bc-9195-4209-be9c-1219659aeca5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:22:35 crc kubenswrapper[4981]: I0227 19:22:35.940451 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc95h\" (UniqueName: \"kubernetes.io/projected/c04435bc-9195-4209-be9c-1219659aeca5-kube-api-access-sc95h\") on node \"crc\" DevicePath \"\"" Feb 27 19:22:35 crc kubenswrapper[4981]: I0227 19:22:35.940492 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c04435bc-9195-4209-be9c-1219659aeca5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:22:36 crc kubenswrapper[4981]: I0227 19:22:36.073862 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zhw8t"] Feb 27 19:22:36 crc kubenswrapper[4981]: I0227 19:22:36.083648 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zhw8t"] Feb 27 19:22:37 crc kubenswrapper[4981]: I0227 19:22:37.639119 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c04435bc-9195-4209-be9c-1219659aeca5" path="/var/lib/kubelet/pods/c04435bc-9195-4209-be9c-1219659aeca5/volumes" Feb 27 19:22:50 crc kubenswrapper[4981]: I0227 19:22:50.248847 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:22:50 crc kubenswrapper[4981]: I0227 19:22:50.249380 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:23:20 crc kubenswrapper[4981]: I0227 
19:23:20.248911 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:23:20 crc kubenswrapper[4981]: I0227 19:23:20.249530 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:23:21 crc kubenswrapper[4981]: I0227 19:23:21.774277 4981 scope.go:117] "RemoveContainer" containerID="9bf87d74da42f41aead6e2511ae77fd0b199c2f99ec7fddc34b2d5c03a436bfa" Feb 27 19:23:21 crc kubenswrapper[4981]: I0227 19:23:21.802733 4981 scope.go:117] "RemoveContainer" containerID="568583953d6093f1d60b2ba27dc4cb9c593eb884e21cfcf4f13ea7f777856965" Feb 27 19:23:21 crc kubenswrapper[4981]: I0227 19:23:21.822006 4981 scope.go:117] "RemoveContainer" containerID="85db6b13408ac6d2fc34958e4e235baace7ea42c2b7f4693b9c8c037070214e8" Feb 27 19:23:21 crc kubenswrapper[4981]: I0227 19:23:21.835871 4981 scope.go:117] "RemoveContainer" containerID="d457a0124f7f00549b4901eaf6afb0ec7c6f599305643ac259338e9d66f9806a" Feb 27 19:23:21 crc kubenswrapper[4981]: I0227 19:23:21.861261 4981 scope.go:117] "RemoveContainer" containerID="ac77ba0d4f1fbd810f57dffbc4656f2f271323b9ae46ef49af6af868539f9fb8" Feb 27 19:23:21 crc kubenswrapper[4981]: I0227 19:23:21.878771 4981 scope.go:117] "RemoveContainer" containerID="ea037a2a3049ae2cd6efb3f77f7b56bfc162c209d7183395934e2ad212d932e0" Feb 27 19:23:21 crc kubenswrapper[4981]: I0227 19:23:21.901121 4981 scope.go:117] "RemoveContainer" containerID="0107fd337927931131bec521f05b370a528d6b80221a1a8d41f45e98551f9de5" Feb 27 19:23:21 crc kubenswrapper[4981]: I0227 
19:23:21.919472 4981 scope.go:117] "RemoveContainer" containerID="960714bfdb9322ccdb21cc4dfccbeb77cb41f8d69cf4cc989ddd293036a71f62" Feb 27 19:23:21 crc kubenswrapper[4981]: I0227 19:23:21.936771 4981 scope.go:117] "RemoveContainer" containerID="25fda4f10dfd0f44cf1cd3087585b51cb2ce5d153f1ed85faefc7e85006aee22" Feb 27 19:23:21 crc kubenswrapper[4981]: I0227 19:23:21.951698 4981 scope.go:117] "RemoveContainer" containerID="cafb740cb4b2feea0c56af385f6a4dd4d0f8dbdd880b5a3bb59ad79c5321cb28" Feb 27 19:23:21 crc kubenswrapper[4981]: I0227 19:23:21.968142 4981 scope.go:117] "RemoveContainer" containerID="104ad2208f27454f8e4b776bd22c70b0029b5027f407230759c896d3d4ea0beb" Feb 27 19:23:21 crc kubenswrapper[4981]: I0227 19:23:21.985604 4981 scope.go:117] "RemoveContainer" containerID="10e9733bd1a84beb0e9d3bdac8211f223dcef3ed3d8833b09543cc31cc3e56eb" Feb 27 19:23:22 crc kubenswrapper[4981]: I0227 19:23:22.000442 4981 scope.go:117] "RemoveContainer" containerID="b75fa8c30fe9ac22cea8e277b3c27abc3ec446c1219488c3feaebdf7810f7611" Feb 27 19:23:22 crc kubenswrapper[4981]: I0227 19:23:22.015329 4981 scope.go:117] "RemoveContainer" containerID="ee1beae4adf688b9d46b9f3dd39cba400ca8551a639f862cc5873a822addc8ca" Feb 27 19:23:50 crc kubenswrapper[4981]: I0227 19:23:50.249043 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:23:50 crc kubenswrapper[4981]: I0227 19:23:50.249520 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:23:50 crc kubenswrapper[4981]: I0227 
19:23:50.249563 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 19:23:50 crc kubenswrapper[4981]: I0227 19:23:50.250222 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6"} pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 19:23:50 crc kubenswrapper[4981]: I0227 19:23:50.250274 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" containerID="cri-o://d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6" gracePeriod=600 Feb 27 19:23:50 crc kubenswrapper[4981]: E0227 19:23:50.385625 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:23:51 crc kubenswrapper[4981]: I0227 19:23:51.353010 4981 generic.go:334] "Generic (PLEG): container finished" podID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6" exitCode=0 Feb 27 19:23:51 crc kubenswrapper[4981]: I0227 19:23:51.353092 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" 
event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerDied","Data":"d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6"} Feb 27 19:23:51 crc kubenswrapper[4981]: I0227 19:23:51.354111 4981 scope.go:117] "RemoveContainer" containerID="3eaaac0016632062e717d6fc785b5e6a960c31de063fc4ec9f829edb351d80fe" Feb 27 19:23:51 crc kubenswrapper[4981]: I0227 19:23:51.354598 4981 scope.go:117] "RemoveContainer" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6" Feb 27 19:23:51 crc kubenswrapper[4981]: E0227 19:23:51.354811 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:24:00 crc kubenswrapper[4981]: I0227 19:24:00.152923 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537004-rkvzd"] Feb 27 19:24:00 crc kubenswrapper[4981]: E0227 19:24:00.153892 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04435bc-9195-4209-be9c-1219659aeca5" containerName="registry-server" Feb 27 19:24:00 crc kubenswrapper[4981]: I0227 19:24:00.153906 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04435bc-9195-4209-be9c-1219659aeca5" containerName="registry-server" Feb 27 19:24:00 crc kubenswrapper[4981]: E0227 19:24:00.153924 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04435bc-9195-4209-be9c-1219659aeca5" containerName="extract-content" Feb 27 19:24:00 crc kubenswrapper[4981]: I0227 19:24:00.153933 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04435bc-9195-4209-be9c-1219659aeca5" containerName="extract-content" Feb 27 19:24:00 crc 
kubenswrapper[4981]: E0227 19:24:00.153947 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04435bc-9195-4209-be9c-1219659aeca5" containerName="extract-utilities" Feb 27 19:24:00 crc kubenswrapper[4981]: I0227 19:24:00.153956 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04435bc-9195-4209-be9c-1219659aeca5" containerName="extract-utilities" Feb 27 19:24:00 crc kubenswrapper[4981]: I0227 19:24:00.154149 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c04435bc-9195-4209-be9c-1219659aeca5" containerName="registry-server" Feb 27 19:24:00 crc kubenswrapper[4981]: I0227 19:24:00.154720 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537004-rkvzd" Feb 27 19:24:00 crc kubenswrapper[4981]: I0227 19:24:00.157351 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:24:00 crc kubenswrapper[4981]: I0227 19:24:00.157540 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf" Feb 27 19:24:00 crc kubenswrapper[4981]: I0227 19:24:00.157849 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:24:00 crc kubenswrapper[4981]: I0227 19:24:00.167723 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537004-rkvzd"] Feb 27 19:24:00 crc kubenswrapper[4981]: I0227 19:24:00.180366 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-478pw\" (UniqueName: \"kubernetes.io/projected/26dd5ebe-6ac6-4201-8ee8-aeb7744f5154-kube-api-access-478pw\") pod \"auto-csr-approver-29537004-rkvzd\" (UID: \"26dd5ebe-6ac6-4201-8ee8-aeb7744f5154\") " pod="openshift-infra/auto-csr-approver-29537004-rkvzd" Feb 27 19:24:00 crc kubenswrapper[4981]: I0227 19:24:00.281518 4981 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-478pw\" (UniqueName: \"kubernetes.io/projected/26dd5ebe-6ac6-4201-8ee8-aeb7744f5154-kube-api-access-478pw\") pod \"auto-csr-approver-29537004-rkvzd\" (UID: \"26dd5ebe-6ac6-4201-8ee8-aeb7744f5154\") " pod="openshift-infra/auto-csr-approver-29537004-rkvzd" Feb 27 19:24:00 crc kubenswrapper[4981]: I0227 19:24:00.303858 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-478pw\" (UniqueName: \"kubernetes.io/projected/26dd5ebe-6ac6-4201-8ee8-aeb7744f5154-kube-api-access-478pw\") pod \"auto-csr-approver-29537004-rkvzd\" (UID: \"26dd5ebe-6ac6-4201-8ee8-aeb7744f5154\") " pod="openshift-infra/auto-csr-approver-29537004-rkvzd" Feb 27 19:24:00 crc kubenswrapper[4981]: I0227 19:24:00.483016 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537004-rkvzd" Feb 27 19:24:00 crc kubenswrapper[4981]: I0227 19:24:00.889996 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537004-rkvzd"] Feb 27 19:24:01 crc kubenswrapper[4981]: I0227 19:24:01.445472 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537004-rkvzd" event={"ID":"26dd5ebe-6ac6-4201-8ee8-aeb7744f5154","Type":"ContainerStarted","Data":"4b28afe43e8d0b6c23747a4f0b26b452683b4ba4db9d8cfa14caaeb1173d3f0b"} Feb 27 19:24:02 crc kubenswrapper[4981]: I0227 19:24:02.453672 4981 generic.go:334] "Generic (PLEG): container finished" podID="26dd5ebe-6ac6-4201-8ee8-aeb7744f5154" containerID="dc25501470957530ce00490314bfa4ed1be9479a98dcf7441185058c5e491fb0" exitCode=0 Feb 27 19:24:02 crc kubenswrapper[4981]: I0227 19:24:02.453774 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537004-rkvzd" 
event={"ID":"26dd5ebe-6ac6-4201-8ee8-aeb7744f5154","Type":"ContainerDied","Data":"dc25501470957530ce00490314bfa4ed1be9479a98dcf7441185058c5e491fb0"} Feb 27 19:24:03 crc kubenswrapper[4981]: I0227 19:24:03.742251 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537004-rkvzd" Feb 27 19:24:03 crc kubenswrapper[4981]: I0227 19:24:03.935412 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-478pw\" (UniqueName: \"kubernetes.io/projected/26dd5ebe-6ac6-4201-8ee8-aeb7744f5154-kube-api-access-478pw\") pod \"26dd5ebe-6ac6-4201-8ee8-aeb7744f5154\" (UID: \"26dd5ebe-6ac6-4201-8ee8-aeb7744f5154\") " Feb 27 19:24:03 crc kubenswrapper[4981]: I0227 19:24:03.940983 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26dd5ebe-6ac6-4201-8ee8-aeb7744f5154-kube-api-access-478pw" (OuterVolumeSpecName: "kube-api-access-478pw") pod "26dd5ebe-6ac6-4201-8ee8-aeb7744f5154" (UID: "26dd5ebe-6ac6-4201-8ee8-aeb7744f5154"). InnerVolumeSpecName "kube-api-access-478pw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:24:04 crc kubenswrapper[4981]: I0227 19:24:04.036474 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-478pw\" (UniqueName: \"kubernetes.io/projected/26dd5ebe-6ac6-4201-8ee8-aeb7744f5154-kube-api-access-478pw\") on node \"crc\" DevicePath \"\"" Feb 27 19:24:04 crc kubenswrapper[4981]: I0227 19:24:04.475302 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537004-rkvzd" event={"ID":"26dd5ebe-6ac6-4201-8ee8-aeb7744f5154","Type":"ContainerDied","Data":"4b28afe43e8d0b6c23747a4f0b26b452683b4ba4db9d8cfa14caaeb1173d3f0b"} Feb 27 19:24:04 crc kubenswrapper[4981]: I0227 19:24:04.475356 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537004-rkvzd" Feb 27 19:24:04 crc kubenswrapper[4981]: I0227 19:24:04.475362 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b28afe43e8d0b6c23747a4f0b26b452683b4ba4db9d8cfa14caaeb1173d3f0b" Feb 27 19:24:04 crc kubenswrapper[4981]: I0227 19:24:04.838649 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536998-4jhhd"] Feb 27 19:24:04 crc kubenswrapper[4981]: I0227 19:24:04.843829 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536998-4jhhd"] Feb 27 19:24:05 crc kubenswrapper[4981]: I0227 19:24:05.628654 4981 scope.go:117] "RemoveContainer" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6" Feb 27 19:24:05 crc kubenswrapper[4981]: E0227 19:24:05.629620 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:24:05 crc kubenswrapper[4981]: I0227 19:24:05.639553 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="155a2ade-2146-4eb4-8f2a-956e1dbbc1c9" path="/var/lib/kubelet/pods/155a2ade-2146-4eb4-8f2a-956e1dbbc1c9/volumes" Feb 27 19:24:20 crc kubenswrapper[4981]: I0227 19:24:20.628087 4981 scope.go:117] "RemoveContainer" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6" Feb 27 19:24:20 crc kubenswrapper[4981]: E0227 19:24:20.628855 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:24:22 crc kubenswrapper[4981]: I0227 19:24:22.625172 4981 scope.go:117] "RemoveContainer" containerID="fb14701431afb434dd121e9dd7a42dbd8daf25a43721f12ab7a46f62783db48a" Feb 27 19:24:22 crc kubenswrapper[4981]: I0227 19:24:22.670879 4981 scope.go:117] "RemoveContainer" containerID="ad751a2a3b8f1441777f905245851d231f1a02d49eb9a9ac2a1fa328f8c6d264" Feb 27 19:24:22 crc kubenswrapper[4981]: I0227 19:24:22.704487 4981 scope.go:117] "RemoveContainer" containerID="327f56ac397dd5ee7f5a40a17d51b3b2b2a3f923f8d5565d24f976bac2c30580" Feb 27 19:24:31 crc kubenswrapper[4981]: I0227 19:24:31.632304 4981 scope.go:117] "RemoveContainer" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6" Feb 27 19:24:31 crc kubenswrapper[4981]: E0227 19:24:31.633173 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:24:46 crc kubenswrapper[4981]: I0227 19:24:46.629217 4981 scope.go:117] "RemoveContainer" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6" Feb 27 19:24:46 crc kubenswrapper[4981]: E0227 19:24:46.630002 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:24:58 crc kubenswrapper[4981]: I0227 19:24:58.628825 4981 scope.go:117] "RemoveContainer" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6" Feb 27 19:24:58 crc kubenswrapper[4981]: E0227 19:24:58.629561 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:25:11 crc kubenswrapper[4981]: I0227 19:25:11.633581 4981 scope.go:117] "RemoveContainer" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6" Feb 27 19:25:11 crc kubenswrapper[4981]: E0227 19:25:11.634440 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:25:22 crc kubenswrapper[4981]: I0227 19:25:22.760627 4981 scope.go:117] "RemoveContainer" containerID="007fd137645033b11c92c0c281ae4b91b4c7023beffad98c9d943dcce0e7b915" Feb 27 19:25:22 crc kubenswrapper[4981]: I0227 19:25:22.809410 4981 scope.go:117] "RemoveContainer" containerID="37d59483cf58af393b8691733cb08b1211746466334fa7c73d6a0931c5a94550" Feb 27 19:25:26 crc kubenswrapper[4981]: I0227 19:25:26.628262 4981 
scope.go:117] "RemoveContainer" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6" Feb 27 19:25:26 crc kubenswrapper[4981]: E0227 19:25:26.628740 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:25:38 crc kubenswrapper[4981]: I0227 19:25:38.628495 4981 scope.go:117] "RemoveContainer" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6" Feb 27 19:25:38 crc kubenswrapper[4981]: E0227 19:25:38.628922 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:25:52 crc kubenswrapper[4981]: I0227 19:25:52.635035 4981 scope.go:117] "RemoveContainer" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6" Feb 27 19:25:52 crc kubenswrapper[4981]: E0227 19:25:52.636108 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:25:54 crc kubenswrapper[4981]: I0227 
19:25:54.040860 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gjnzw"] Feb 27 19:25:54 crc kubenswrapper[4981]: E0227 19:25:54.041260 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26dd5ebe-6ac6-4201-8ee8-aeb7744f5154" containerName="oc" Feb 27 19:25:54 crc kubenswrapper[4981]: I0227 19:25:54.041278 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="26dd5ebe-6ac6-4201-8ee8-aeb7744f5154" containerName="oc" Feb 27 19:25:54 crc kubenswrapper[4981]: I0227 19:25:54.041457 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="26dd5ebe-6ac6-4201-8ee8-aeb7744f5154" containerName="oc" Feb 27 19:25:54 crc kubenswrapper[4981]: I0227 19:25:54.042668 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gjnzw" Feb 27 19:25:54 crc kubenswrapper[4981]: I0227 19:25:54.069414 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gjnzw"] Feb 27 19:25:54 crc kubenswrapper[4981]: I0227 19:25:54.071641 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ec7a63-f0f1-4725-af59-ddcea8d9ce36-catalog-content\") pod \"certified-operators-gjnzw\" (UID: \"b5ec7a63-f0f1-4725-af59-ddcea8d9ce36\") " pod="openshift-marketplace/certified-operators-gjnzw" Feb 27 19:25:54 crc kubenswrapper[4981]: I0227 19:25:54.071690 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx52k\" (UniqueName: \"kubernetes.io/projected/b5ec7a63-f0f1-4725-af59-ddcea8d9ce36-kube-api-access-jx52k\") pod \"certified-operators-gjnzw\" (UID: \"b5ec7a63-f0f1-4725-af59-ddcea8d9ce36\") " pod="openshift-marketplace/certified-operators-gjnzw" Feb 27 19:25:54 crc kubenswrapper[4981]: I0227 19:25:54.071887 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ec7a63-f0f1-4725-af59-ddcea8d9ce36-utilities\") pod \"certified-operators-gjnzw\" (UID: \"b5ec7a63-f0f1-4725-af59-ddcea8d9ce36\") " pod="openshift-marketplace/certified-operators-gjnzw" Feb 27 19:25:54 crc kubenswrapper[4981]: I0227 19:25:54.173216 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ec7a63-f0f1-4725-af59-ddcea8d9ce36-catalog-content\") pod \"certified-operators-gjnzw\" (UID: \"b5ec7a63-f0f1-4725-af59-ddcea8d9ce36\") " pod="openshift-marketplace/certified-operators-gjnzw" Feb 27 19:25:54 crc kubenswrapper[4981]: I0227 19:25:54.173325 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx52k\" (UniqueName: \"kubernetes.io/projected/b5ec7a63-f0f1-4725-af59-ddcea8d9ce36-kube-api-access-jx52k\") pod \"certified-operators-gjnzw\" (UID: \"b5ec7a63-f0f1-4725-af59-ddcea8d9ce36\") " pod="openshift-marketplace/certified-operators-gjnzw" Feb 27 19:25:54 crc kubenswrapper[4981]: I0227 19:25:54.173366 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ec7a63-f0f1-4725-af59-ddcea8d9ce36-utilities\") pod \"certified-operators-gjnzw\" (UID: \"b5ec7a63-f0f1-4725-af59-ddcea8d9ce36\") " pod="openshift-marketplace/certified-operators-gjnzw" Feb 27 19:25:54 crc kubenswrapper[4981]: I0227 19:25:54.173740 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ec7a63-f0f1-4725-af59-ddcea8d9ce36-catalog-content\") pod \"certified-operators-gjnzw\" (UID: \"b5ec7a63-f0f1-4725-af59-ddcea8d9ce36\") " pod="openshift-marketplace/certified-operators-gjnzw" Feb 27 19:25:54 crc kubenswrapper[4981]: I0227 19:25:54.173800 4981 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ec7a63-f0f1-4725-af59-ddcea8d9ce36-utilities\") pod \"certified-operators-gjnzw\" (UID: \"b5ec7a63-f0f1-4725-af59-ddcea8d9ce36\") " pod="openshift-marketplace/certified-operators-gjnzw" Feb 27 19:25:54 crc kubenswrapper[4981]: I0227 19:25:54.192377 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx52k\" (UniqueName: \"kubernetes.io/projected/b5ec7a63-f0f1-4725-af59-ddcea8d9ce36-kube-api-access-jx52k\") pod \"certified-operators-gjnzw\" (UID: \"b5ec7a63-f0f1-4725-af59-ddcea8d9ce36\") " pod="openshift-marketplace/certified-operators-gjnzw" Feb 27 19:25:54 crc kubenswrapper[4981]: I0227 19:25:54.364851 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gjnzw" Feb 27 19:25:54 crc kubenswrapper[4981]: I0227 19:25:54.817376 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gjnzw"] Feb 27 19:25:55 crc kubenswrapper[4981]: I0227 19:25:55.317568 4981 generic.go:334] "Generic (PLEG): container finished" podID="b5ec7a63-f0f1-4725-af59-ddcea8d9ce36" containerID="c93f269aa63eb4335a99e8555b5b952b6491d514b9a20cb4ff52c0f23d8a526f" exitCode=0 Feb 27 19:25:55 crc kubenswrapper[4981]: I0227 19:25:55.317622 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gjnzw" event={"ID":"b5ec7a63-f0f1-4725-af59-ddcea8d9ce36","Type":"ContainerDied","Data":"c93f269aa63eb4335a99e8555b5b952b6491d514b9a20cb4ff52c0f23d8a526f"} Feb 27 19:25:55 crc kubenswrapper[4981]: I0227 19:25:55.318754 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gjnzw" event={"ID":"b5ec7a63-f0f1-4725-af59-ddcea8d9ce36","Type":"ContainerStarted","Data":"0b491656336e1a4424e62b0b0a160788c58513a0630089c32994c109224be792"} Feb 27 19:25:55 crc kubenswrapper[4981]: I0227 
19:25:55.319280 4981 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 19:25:58 crc kubenswrapper[4981]: I0227 19:25:58.338683 4981 generic.go:334] "Generic (PLEG): container finished" podID="b5ec7a63-f0f1-4725-af59-ddcea8d9ce36" containerID="b5c59b597fa03f783c8465957010ad0335cf32271c773dde9f4b26a50fbcb4cc" exitCode=0 Feb 27 19:25:58 crc kubenswrapper[4981]: I0227 19:25:58.338744 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gjnzw" event={"ID":"b5ec7a63-f0f1-4725-af59-ddcea8d9ce36","Type":"ContainerDied","Data":"b5c59b597fa03f783c8465957010ad0335cf32271c773dde9f4b26a50fbcb4cc"} Feb 27 19:26:00 crc kubenswrapper[4981]: I0227 19:26:00.139804 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537006-rzdzv"] Feb 27 19:26:00 crc kubenswrapper[4981]: I0227 19:26:00.141016 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537006-rzdzv" Feb 27 19:26:00 crc kubenswrapper[4981]: I0227 19:26:00.143095 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:26:00 crc kubenswrapper[4981]: I0227 19:26:00.143612 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:26:00 crc kubenswrapper[4981]: I0227 19:26:00.145479 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf" Feb 27 19:26:00 crc kubenswrapper[4981]: I0227 19:26:00.154575 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537006-rzdzv"] Feb 27 19:26:00 crc kubenswrapper[4981]: I0227 19:26:00.275150 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4psj\" (UniqueName: 
\"kubernetes.io/projected/e8a1b68a-07b5-44d6-b47a-654d9823e2a2-kube-api-access-g4psj\") pod \"auto-csr-approver-29537006-rzdzv\" (UID: \"e8a1b68a-07b5-44d6-b47a-654d9823e2a2\") " pod="openshift-infra/auto-csr-approver-29537006-rzdzv" Feb 27 19:26:00 crc kubenswrapper[4981]: I0227 19:26:00.377269 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4psj\" (UniqueName: \"kubernetes.io/projected/e8a1b68a-07b5-44d6-b47a-654d9823e2a2-kube-api-access-g4psj\") pod \"auto-csr-approver-29537006-rzdzv\" (UID: \"e8a1b68a-07b5-44d6-b47a-654d9823e2a2\") " pod="openshift-infra/auto-csr-approver-29537006-rzdzv" Feb 27 19:26:00 crc kubenswrapper[4981]: I0227 19:26:00.393794 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4psj\" (UniqueName: \"kubernetes.io/projected/e8a1b68a-07b5-44d6-b47a-654d9823e2a2-kube-api-access-g4psj\") pod \"auto-csr-approver-29537006-rzdzv\" (UID: \"e8a1b68a-07b5-44d6-b47a-654d9823e2a2\") " pod="openshift-infra/auto-csr-approver-29537006-rzdzv" Feb 27 19:26:00 crc kubenswrapper[4981]: I0227 19:26:00.485085 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537006-rzdzv" Feb 27 19:26:01 crc kubenswrapper[4981]: I0227 19:26:01.363310 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gjnzw" event={"ID":"b5ec7a63-f0f1-4725-af59-ddcea8d9ce36","Type":"ContainerStarted","Data":"30088ad2dd5fe71edb0c6a384084a80631b046114cf75135782749817239c832"} Feb 27 19:26:01 crc kubenswrapper[4981]: I0227 19:26:01.448364 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537006-rzdzv"] Feb 27 19:26:02 crc kubenswrapper[4981]: I0227 19:26:02.369511 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537006-rzdzv" event={"ID":"e8a1b68a-07b5-44d6-b47a-654d9823e2a2","Type":"ContainerStarted","Data":"e0645d9586167220a2863ca1e869b79bac7f1852af6e1b7321fcd43e1a87fa81"} Feb 27 19:26:02 crc kubenswrapper[4981]: I0227 19:26:02.392877 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gjnzw" podStartSLOduration=2.677679641 podStartE2EDuration="8.392857984s" podCreationTimestamp="2026-02-27 19:25:54 +0000 UTC" firstStartedPulling="2026-02-27 19:25:55.319081635 +0000 UTC m=+2454.797862795" lastFinishedPulling="2026-02-27 19:26:01.034259988 +0000 UTC m=+2460.513041138" observedRunningTime="2026-02-27 19:26:02.384474382 +0000 UTC m=+2461.863255562" watchObservedRunningTime="2026-02-27 19:26:02.392857984 +0000 UTC m=+2461.871639134" Feb 27 19:26:04 crc kubenswrapper[4981]: I0227 19:26:04.365561 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gjnzw" Feb 27 19:26:04 crc kubenswrapper[4981]: I0227 19:26:04.366508 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gjnzw" Feb 27 19:26:04 crc kubenswrapper[4981]: I0227 19:26:04.411331 4981 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gjnzw" Feb 27 19:26:05 crc kubenswrapper[4981]: I0227 19:26:05.628985 4981 scope.go:117] "RemoveContainer" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6" Feb 27 19:26:05 crc kubenswrapper[4981]: E0227 19:26:05.629565 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:26:06 crc kubenswrapper[4981]: I0227 19:26:06.400168 4981 generic.go:334] "Generic (PLEG): container finished" podID="e8a1b68a-07b5-44d6-b47a-654d9823e2a2" containerID="052950dd787b95cfeefdbf43bc5fa5522687c017c9a6ff66f8b38174264e5a74" exitCode=0 Feb 27 19:26:06 crc kubenswrapper[4981]: I0227 19:26:06.400248 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537006-rzdzv" event={"ID":"e8a1b68a-07b5-44d6-b47a-654d9823e2a2","Type":"ContainerDied","Data":"052950dd787b95cfeefdbf43bc5fa5522687c017c9a6ff66f8b38174264e5a74"} Feb 27 19:26:07 crc kubenswrapper[4981]: I0227 19:26:07.658345 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537006-rzdzv" Feb 27 19:26:07 crc kubenswrapper[4981]: I0227 19:26:07.676288 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4psj\" (UniqueName: \"kubernetes.io/projected/e8a1b68a-07b5-44d6-b47a-654d9823e2a2-kube-api-access-g4psj\") pod \"e8a1b68a-07b5-44d6-b47a-654d9823e2a2\" (UID: \"e8a1b68a-07b5-44d6-b47a-654d9823e2a2\") " Feb 27 19:26:07 crc kubenswrapper[4981]: I0227 19:26:07.681621 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8a1b68a-07b5-44d6-b47a-654d9823e2a2-kube-api-access-g4psj" (OuterVolumeSpecName: "kube-api-access-g4psj") pod "e8a1b68a-07b5-44d6-b47a-654d9823e2a2" (UID: "e8a1b68a-07b5-44d6-b47a-654d9823e2a2"). InnerVolumeSpecName "kube-api-access-g4psj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:26:07 crc kubenswrapper[4981]: I0227 19:26:07.779021 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4psj\" (UniqueName: \"kubernetes.io/projected/e8a1b68a-07b5-44d6-b47a-654d9823e2a2-kube-api-access-g4psj\") on node \"crc\" DevicePath \"\"" Feb 27 19:26:08 crc kubenswrapper[4981]: I0227 19:26:08.419617 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537006-rzdzv" event={"ID":"e8a1b68a-07b5-44d6-b47a-654d9823e2a2","Type":"ContainerDied","Data":"e0645d9586167220a2863ca1e869b79bac7f1852af6e1b7321fcd43e1a87fa81"} Feb 27 19:26:08 crc kubenswrapper[4981]: I0227 19:26:08.419657 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0645d9586167220a2863ca1e869b79bac7f1852af6e1b7321fcd43e1a87fa81" Feb 27 19:26:08 crc kubenswrapper[4981]: I0227 19:26:08.419713 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537006-rzdzv" Feb 27 19:26:08 crc kubenswrapper[4981]: I0227 19:26:08.727025 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537000-sn5s2"] Feb 27 19:26:08 crc kubenswrapper[4981]: I0227 19:26:08.735283 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537000-sn5s2"] Feb 27 19:26:09 crc kubenswrapper[4981]: I0227 19:26:09.637819 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf7d5ec0-a275-4956-8a1d-9455ffd87ee5" path="/var/lib/kubelet/pods/bf7d5ec0-a275-4956-8a1d-9455ffd87ee5/volumes" Feb 27 19:26:14 crc kubenswrapper[4981]: I0227 19:26:14.421671 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gjnzw" Feb 27 19:26:14 crc kubenswrapper[4981]: I0227 19:26:14.467974 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gjnzw"] Feb 27 19:26:14 crc kubenswrapper[4981]: I0227 19:26:14.468219 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gjnzw" podUID="b5ec7a63-f0f1-4725-af59-ddcea8d9ce36" containerName="registry-server" containerID="cri-o://30088ad2dd5fe71edb0c6a384084a80631b046114cf75135782749817239c832" gracePeriod=2 Feb 27 19:26:15 crc kubenswrapper[4981]: I0227 19:26:15.470595 4981 generic.go:334] "Generic (PLEG): container finished" podID="b5ec7a63-f0f1-4725-af59-ddcea8d9ce36" containerID="30088ad2dd5fe71edb0c6a384084a80631b046114cf75135782749817239c832" exitCode=0 Feb 27 19:26:15 crc kubenswrapper[4981]: I0227 19:26:15.470636 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gjnzw" event={"ID":"b5ec7a63-f0f1-4725-af59-ddcea8d9ce36","Type":"ContainerDied","Data":"30088ad2dd5fe71edb0c6a384084a80631b046114cf75135782749817239c832"} Feb 27 
19:26:16 crc kubenswrapper[4981]: I0227 19:26:16.103131 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gjnzw" Feb 27 19:26:16 crc kubenswrapper[4981]: I0227 19:26:16.286089 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ec7a63-f0f1-4725-af59-ddcea8d9ce36-catalog-content\") pod \"b5ec7a63-f0f1-4725-af59-ddcea8d9ce36\" (UID: \"b5ec7a63-f0f1-4725-af59-ddcea8d9ce36\") " Feb 27 19:26:16 crc kubenswrapper[4981]: I0227 19:26:16.286241 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ec7a63-f0f1-4725-af59-ddcea8d9ce36-utilities\") pod \"b5ec7a63-f0f1-4725-af59-ddcea8d9ce36\" (UID: \"b5ec7a63-f0f1-4725-af59-ddcea8d9ce36\") " Feb 27 19:26:16 crc kubenswrapper[4981]: I0227 19:26:16.286276 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx52k\" (UniqueName: \"kubernetes.io/projected/b5ec7a63-f0f1-4725-af59-ddcea8d9ce36-kube-api-access-jx52k\") pod \"b5ec7a63-f0f1-4725-af59-ddcea8d9ce36\" (UID: \"b5ec7a63-f0f1-4725-af59-ddcea8d9ce36\") " Feb 27 19:26:16 crc kubenswrapper[4981]: I0227 19:26:16.286954 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5ec7a63-f0f1-4725-af59-ddcea8d9ce36-utilities" (OuterVolumeSpecName: "utilities") pod "b5ec7a63-f0f1-4725-af59-ddcea8d9ce36" (UID: "b5ec7a63-f0f1-4725-af59-ddcea8d9ce36"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:26:16 crc kubenswrapper[4981]: I0227 19:26:16.287277 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ec7a63-f0f1-4725-af59-ddcea8d9ce36-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:26:16 crc kubenswrapper[4981]: I0227 19:26:16.291847 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5ec7a63-f0f1-4725-af59-ddcea8d9ce36-kube-api-access-jx52k" (OuterVolumeSpecName: "kube-api-access-jx52k") pod "b5ec7a63-f0f1-4725-af59-ddcea8d9ce36" (UID: "b5ec7a63-f0f1-4725-af59-ddcea8d9ce36"). InnerVolumeSpecName "kube-api-access-jx52k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:26:16 crc kubenswrapper[4981]: I0227 19:26:16.339222 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5ec7a63-f0f1-4725-af59-ddcea8d9ce36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5ec7a63-f0f1-4725-af59-ddcea8d9ce36" (UID: "b5ec7a63-f0f1-4725-af59-ddcea8d9ce36"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:26:16 crc kubenswrapper[4981]: I0227 19:26:16.388423 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ec7a63-f0f1-4725-af59-ddcea8d9ce36-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:26:16 crc kubenswrapper[4981]: I0227 19:26:16.388471 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx52k\" (UniqueName: \"kubernetes.io/projected/b5ec7a63-f0f1-4725-af59-ddcea8d9ce36-kube-api-access-jx52k\") on node \"crc\" DevicePath \"\"" Feb 27 19:26:16 crc kubenswrapper[4981]: I0227 19:26:16.479314 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gjnzw" event={"ID":"b5ec7a63-f0f1-4725-af59-ddcea8d9ce36","Type":"ContainerDied","Data":"0b491656336e1a4424e62b0b0a160788c58513a0630089c32994c109224be792"} Feb 27 19:26:16 crc kubenswrapper[4981]: I0227 19:26:16.479373 4981 scope.go:117] "RemoveContainer" containerID="30088ad2dd5fe71edb0c6a384084a80631b046114cf75135782749817239c832" Feb 27 19:26:16 crc kubenswrapper[4981]: I0227 19:26:16.479379 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gjnzw" Feb 27 19:26:16 crc kubenswrapper[4981]: I0227 19:26:16.495577 4981 scope.go:117] "RemoveContainer" containerID="b5c59b597fa03f783c8465957010ad0335cf32271c773dde9f4b26a50fbcb4cc" Feb 27 19:26:16 crc kubenswrapper[4981]: I0227 19:26:16.511194 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gjnzw"] Feb 27 19:26:16 crc kubenswrapper[4981]: I0227 19:26:16.516870 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gjnzw"] Feb 27 19:26:16 crc kubenswrapper[4981]: I0227 19:26:16.536488 4981 scope.go:117] "RemoveContainer" containerID="c93f269aa63eb4335a99e8555b5b952b6491d514b9a20cb4ff52c0f23d8a526f" Feb 27 19:26:17 crc kubenswrapper[4981]: I0227 19:26:17.636998 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5ec7a63-f0f1-4725-af59-ddcea8d9ce36" path="/var/lib/kubelet/pods/b5ec7a63-f0f1-4725-af59-ddcea8d9ce36/volumes" Feb 27 19:26:19 crc kubenswrapper[4981]: I0227 19:26:19.628646 4981 scope.go:117] "RemoveContainer" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6" Feb 27 19:26:19 crc kubenswrapper[4981]: E0227 19:26:19.628858 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:26:22 crc kubenswrapper[4981]: I0227 19:26:22.874639 4981 scope.go:117] "RemoveContainer" containerID="90fe2973c9b0433a16b5e1e8b5ca508e7716c85505362b6649ccbd173f33b91c" Feb 27 19:26:22 crc kubenswrapper[4981]: I0227 19:26:22.921982 4981 scope.go:117] "RemoveContainer" 
containerID="504e7e59163e3827c50e33fcb947ea0b2ecd06d752de8106338c645cdfc2fc77" Feb 27 19:26:32 crc kubenswrapper[4981]: I0227 19:26:32.628644 4981 scope.go:117] "RemoveContainer" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6" Feb 27 19:26:32 crc kubenswrapper[4981]: E0227 19:26:32.629435 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:26:43 crc kubenswrapper[4981]: I0227 19:26:43.628820 4981 scope.go:117] "RemoveContainer" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6" Feb 27 19:26:43 crc kubenswrapper[4981]: E0227 19:26:43.629460 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:26:51 crc kubenswrapper[4981]: I0227 19:26:51.958906 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m9htc"] Feb 27 19:26:51 crc kubenswrapper[4981]: E0227 19:26:51.959611 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8a1b68a-07b5-44d6-b47a-654d9823e2a2" containerName="oc" Feb 27 19:26:51 crc kubenswrapper[4981]: I0227 19:26:51.959623 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8a1b68a-07b5-44d6-b47a-654d9823e2a2" containerName="oc" Feb 27 19:26:51 crc 
kubenswrapper[4981]: E0227 19:26:51.959641 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ec7a63-f0f1-4725-af59-ddcea8d9ce36" containerName="extract-content" Feb 27 19:26:51 crc kubenswrapper[4981]: I0227 19:26:51.959647 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ec7a63-f0f1-4725-af59-ddcea8d9ce36" containerName="extract-content" Feb 27 19:26:51 crc kubenswrapper[4981]: E0227 19:26:51.959655 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ec7a63-f0f1-4725-af59-ddcea8d9ce36" containerName="extract-utilities" Feb 27 19:26:51 crc kubenswrapper[4981]: I0227 19:26:51.959661 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ec7a63-f0f1-4725-af59-ddcea8d9ce36" containerName="extract-utilities" Feb 27 19:26:51 crc kubenswrapper[4981]: E0227 19:26:51.959669 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ec7a63-f0f1-4725-af59-ddcea8d9ce36" containerName="registry-server" Feb 27 19:26:51 crc kubenswrapper[4981]: I0227 19:26:51.959675 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ec7a63-f0f1-4725-af59-ddcea8d9ce36" containerName="registry-server" Feb 27 19:26:51 crc kubenswrapper[4981]: I0227 19:26:51.959800 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8a1b68a-07b5-44d6-b47a-654d9823e2a2" containerName="oc" Feb 27 19:26:51 crc kubenswrapper[4981]: I0227 19:26:51.959815 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5ec7a63-f0f1-4725-af59-ddcea8d9ce36" containerName="registry-server" Feb 27 19:26:51 crc kubenswrapper[4981]: I0227 19:26:51.960793 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m9htc" Feb 27 19:26:51 crc kubenswrapper[4981]: I0227 19:26:51.977711 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m9htc"] Feb 27 19:26:52 crc kubenswrapper[4981]: I0227 19:26:52.061909 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztvc2\" (UniqueName: \"kubernetes.io/projected/8309757c-93d5-4250-8f09-fe20b6fd6835-kube-api-access-ztvc2\") pod \"redhat-operators-m9htc\" (UID: \"8309757c-93d5-4250-8f09-fe20b6fd6835\") " pod="openshift-marketplace/redhat-operators-m9htc" Feb 27 19:26:52 crc kubenswrapper[4981]: I0227 19:26:52.061983 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8309757c-93d5-4250-8f09-fe20b6fd6835-utilities\") pod \"redhat-operators-m9htc\" (UID: \"8309757c-93d5-4250-8f09-fe20b6fd6835\") " pod="openshift-marketplace/redhat-operators-m9htc" Feb 27 19:26:52 crc kubenswrapper[4981]: I0227 19:26:52.062007 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8309757c-93d5-4250-8f09-fe20b6fd6835-catalog-content\") pod \"redhat-operators-m9htc\" (UID: \"8309757c-93d5-4250-8f09-fe20b6fd6835\") " pod="openshift-marketplace/redhat-operators-m9htc" Feb 27 19:26:52 crc kubenswrapper[4981]: I0227 19:26:52.162708 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztvc2\" (UniqueName: \"kubernetes.io/projected/8309757c-93d5-4250-8f09-fe20b6fd6835-kube-api-access-ztvc2\") pod \"redhat-operators-m9htc\" (UID: \"8309757c-93d5-4250-8f09-fe20b6fd6835\") " pod="openshift-marketplace/redhat-operators-m9htc" Feb 27 19:26:52 crc kubenswrapper[4981]: I0227 19:26:52.163104 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8309757c-93d5-4250-8f09-fe20b6fd6835-utilities\") pod \"redhat-operators-m9htc\" (UID: \"8309757c-93d5-4250-8f09-fe20b6fd6835\") " pod="openshift-marketplace/redhat-operators-m9htc" Feb 27 19:26:52 crc kubenswrapper[4981]: I0227 19:26:52.163228 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8309757c-93d5-4250-8f09-fe20b6fd6835-catalog-content\") pod \"redhat-operators-m9htc\" (UID: \"8309757c-93d5-4250-8f09-fe20b6fd6835\") " pod="openshift-marketplace/redhat-operators-m9htc" Feb 27 19:26:52 crc kubenswrapper[4981]: I0227 19:26:52.163696 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8309757c-93d5-4250-8f09-fe20b6fd6835-utilities\") pod \"redhat-operators-m9htc\" (UID: \"8309757c-93d5-4250-8f09-fe20b6fd6835\") " pod="openshift-marketplace/redhat-operators-m9htc" Feb 27 19:26:52 crc kubenswrapper[4981]: I0227 19:26:52.163705 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8309757c-93d5-4250-8f09-fe20b6fd6835-catalog-content\") pod \"redhat-operators-m9htc\" (UID: \"8309757c-93d5-4250-8f09-fe20b6fd6835\") " pod="openshift-marketplace/redhat-operators-m9htc" Feb 27 19:26:52 crc kubenswrapper[4981]: I0227 19:26:52.182466 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztvc2\" (UniqueName: \"kubernetes.io/projected/8309757c-93d5-4250-8f09-fe20b6fd6835-kube-api-access-ztvc2\") pod \"redhat-operators-m9htc\" (UID: \"8309757c-93d5-4250-8f09-fe20b6fd6835\") " pod="openshift-marketplace/redhat-operators-m9htc" Feb 27 19:26:52 crc kubenswrapper[4981]: I0227 19:26:52.287037 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m9htc" Feb 27 19:26:52 crc kubenswrapper[4981]: I0227 19:26:52.491599 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m9htc"] Feb 27 19:26:52 crc kubenswrapper[4981]: I0227 19:26:52.716072 4981 generic.go:334] "Generic (PLEG): container finished" podID="8309757c-93d5-4250-8f09-fe20b6fd6835" containerID="9d46db41e4073ec34078f8da6fd89552a9bf79518a408e6f46b88104103495e0" exitCode=0 Feb 27 19:26:52 crc kubenswrapper[4981]: I0227 19:26:52.716114 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m9htc" event={"ID":"8309757c-93d5-4250-8f09-fe20b6fd6835","Type":"ContainerDied","Data":"9d46db41e4073ec34078f8da6fd89552a9bf79518a408e6f46b88104103495e0"} Feb 27 19:26:52 crc kubenswrapper[4981]: I0227 19:26:52.716139 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m9htc" event={"ID":"8309757c-93d5-4250-8f09-fe20b6fd6835","Type":"ContainerStarted","Data":"238946f3c18fe014ff0fc79e5d6dabf9d624b9afac49ebaeab2f0ee9604b1b0f"} Feb 27 19:26:52 crc kubenswrapper[4981]: E0227 19:26:52.731242 4981 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8309757c_93d5_4250_8f09_fe20b6fd6835.slice/crio-conmon-9d46db41e4073ec34078f8da6fd89552a9bf79518a408e6f46b88104103495e0.scope\": RecentStats: unable to find data in memory cache]" Feb 27 19:26:53 crc kubenswrapper[4981]: I0227 19:26:53.724186 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m9htc" event={"ID":"8309757c-93d5-4250-8f09-fe20b6fd6835","Type":"ContainerStarted","Data":"42a63cc7b186ae1e7e7af2f5c4926b8b83992322afa4eeb1d940947e07f3a5f6"} Feb 27 19:26:54 crc kubenswrapper[4981]: I0227 19:26:54.732834 4981 generic.go:334] "Generic (PLEG): container 
finished" podID="8309757c-93d5-4250-8f09-fe20b6fd6835" containerID="42a63cc7b186ae1e7e7af2f5c4926b8b83992322afa4eeb1d940947e07f3a5f6" exitCode=0 Feb 27 19:26:54 crc kubenswrapper[4981]: I0227 19:26:54.733228 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m9htc" event={"ID":"8309757c-93d5-4250-8f09-fe20b6fd6835","Type":"ContainerDied","Data":"42a63cc7b186ae1e7e7af2f5c4926b8b83992322afa4eeb1d940947e07f3a5f6"} Feb 27 19:26:55 crc kubenswrapper[4981]: I0227 19:26:55.740952 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m9htc" event={"ID":"8309757c-93d5-4250-8f09-fe20b6fd6835","Type":"ContainerStarted","Data":"dfae10fa58b4f4b0a4d1e5af01279c599655e4f3e4ce6260ea5eb9bc07b2e340"} Feb 27 19:26:55 crc kubenswrapper[4981]: I0227 19:26:55.759362 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m9htc" podStartSLOduration=2.3752166470000002 podStartE2EDuration="4.759342402s" podCreationTimestamp="2026-02-27 19:26:51 +0000 UTC" firstStartedPulling="2026-02-27 19:26:52.719331205 +0000 UTC m=+2512.198112365" lastFinishedPulling="2026-02-27 19:26:55.10345696 +0000 UTC m=+2514.582238120" observedRunningTime="2026-02-27 19:26:55.755916445 +0000 UTC m=+2515.234697605" watchObservedRunningTime="2026-02-27 19:26:55.759342402 +0000 UTC m=+2515.238123562" Feb 27 19:26:56 crc kubenswrapper[4981]: I0227 19:26:56.627991 4981 scope.go:117] "RemoveContainer" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6" Feb 27 19:26:56 crc kubenswrapper[4981]: E0227 19:26:56.628308 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:27:02 crc kubenswrapper[4981]: I0227 19:27:02.288171 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m9htc" Feb 27 19:27:02 crc kubenswrapper[4981]: I0227 19:27:02.288628 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m9htc" Feb 27 19:27:02 crc kubenswrapper[4981]: I0227 19:27:02.332368 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m9htc" Feb 27 19:27:02 crc kubenswrapper[4981]: I0227 19:27:02.834819 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m9htc" Feb 27 19:27:02 crc kubenswrapper[4981]: I0227 19:27:02.881849 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m9htc"] Feb 27 19:27:04 crc kubenswrapper[4981]: I0227 19:27:04.797846 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m9htc" podUID="8309757c-93d5-4250-8f09-fe20b6fd6835" containerName="registry-server" containerID="cri-o://dfae10fa58b4f4b0a4d1e5af01279c599655e4f3e4ce6260ea5eb9bc07b2e340" gracePeriod=2 Feb 27 19:27:06 crc kubenswrapper[4981]: I0227 19:27:06.278137 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m9htc" Feb 27 19:27:06 crc kubenswrapper[4981]: I0227 19:27:06.355457 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztvc2\" (UniqueName: \"kubernetes.io/projected/8309757c-93d5-4250-8f09-fe20b6fd6835-kube-api-access-ztvc2\") pod \"8309757c-93d5-4250-8f09-fe20b6fd6835\" (UID: \"8309757c-93d5-4250-8f09-fe20b6fd6835\") " Feb 27 19:27:06 crc kubenswrapper[4981]: I0227 19:27:06.355524 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8309757c-93d5-4250-8f09-fe20b6fd6835-catalog-content\") pod \"8309757c-93d5-4250-8f09-fe20b6fd6835\" (UID: \"8309757c-93d5-4250-8f09-fe20b6fd6835\") " Feb 27 19:27:06 crc kubenswrapper[4981]: I0227 19:27:06.355572 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8309757c-93d5-4250-8f09-fe20b6fd6835-utilities\") pod \"8309757c-93d5-4250-8f09-fe20b6fd6835\" (UID: \"8309757c-93d5-4250-8f09-fe20b6fd6835\") " Feb 27 19:27:06 crc kubenswrapper[4981]: I0227 19:27:06.356468 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8309757c-93d5-4250-8f09-fe20b6fd6835-utilities" (OuterVolumeSpecName: "utilities") pod "8309757c-93d5-4250-8f09-fe20b6fd6835" (UID: "8309757c-93d5-4250-8f09-fe20b6fd6835"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:27:06 crc kubenswrapper[4981]: I0227 19:27:06.361655 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8309757c-93d5-4250-8f09-fe20b6fd6835-kube-api-access-ztvc2" (OuterVolumeSpecName: "kube-api-access-ztvc2") pod "8309757c-93d5-4250-8f09-fe20b6fd6835" (UID: "8309757c-93d5-4250-8f09-fe20b6fd6835"). InnerVolumeSpecName "kube-api-access-ztvc2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:27:06 crc kubenswrapper[4981]: I0227 19:27:06.457547 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8309757c-93d5-4250-8f09-fe20b6fd6835-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:27:06 crc kubenswrapper[4981]: I0227 19:27:06.457622 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztvc2\" (UniqueName: \"kubernetes.io/projected/8309757c-93d5-4250-8f09-fe20b6fd6835-kube-api-access-ztvc2\") on node \"crc\" DevicePath \"\"" Feb 27 19:27:06 crc kubenswrapper[4981]: I0227 19:27:06.484848 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8309757c-93d5-4250-8f09-fe20b6fd6835-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8309757c-93d5-4250-8f09-fe20b6fd6835" (UID: "8309757c-93d5-4250-8f09-fe20b6fd6835"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:27:06 crc kubenswrapper[4981]: I0227 19:27:06.559506 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8309757c-93d5-4250-8f09-fe20b6fd6835-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:27:06 crc kubenswrapper[4981]: I0227 19:27:06.811862 4981 generic.go:334] "Generic (PLEG): container finished" podID="8309757c-93d5-4250-8f09-fe20b6fd6835" containerID="dfae10fa58b4f4b0a4d1e5af01279c599655e4f3e4ce6260ea5eb9bc07b2e340" exitCode=0 Feb 27 19:27:06 crc kubenswrapper[4981]: I0227 19:27:06.811891 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m9htc" Feb 27 19:27:06 crc kubenswrapper[4981]: I0227 19:27:06.811907 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m9htc" event={"ID":"8309757c-93d5-4250-8f09-fe20b6fd6835","Type":"ContainerDied","Data":"dfae10fa58b4f4b0a4d1e5af01279c599655e4f3e4ce6260ea5eb9bc07b2e340"} Feb 27 19:27:06 crc kubenswrapper[4981]: I0227 19:27:06.812106 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m9htc" event={"ID":"8309757c-93d5-4250-8f09-fe20b6fd6835","Type":"ContainerDied","Data":"238946f3c18fe014ff0fc79e5d6dabf9d624b9afac49ebaeab2f0ee9604b1b0f"} Feb 27 19:27:06 crc kubenswrapper[4981]: I0227 19:27:06.812133 4981 scope.go:117] "RemoveContainer" containerID="dfae10fa58b4f4b0a4d1e5af01279c599655e4f3e4ce6260ea5eb9bc07b2e340" Feb 27 19:27:06 crc kubenswrapper[4981]: I0227 19:27:06.829472 4981 scope.go:117] "RemoveContainer" containerID="42a63cc7b186ae1e7e7af2f5c4926b8b83992322afa4eeb1d940947e07f3a5f6" Feb 27 19:27:06 crc kubenswrapper[4981]: I0227 19:27:06.852589 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m9htc"] Feb 27 19:27:06 crc kubenswrapper[4981]: I0227 19:27:06.854721 4981 scope.go:117] "RemoveContainer" containerID="9d46db41e4073ec34078f8da6fd89552a9bf79518a408e6f46b88104103495e0" Feb 27 19:27:06 crc kubenswrapper[4981]: I0227 19:27:06.857863 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m9htc"] Feb 27 19:27:06 crc kubenswrapper[4981]: I0227 19:27:06.876149 4981 scope.go:117] "RemoveContainer" containerID="dfae10fa58b4f4b0a4d1e5af01279c599655e4f3e4ce6260ea5eb9bc07b2e340" Feb 27 19:27:06 crc kubenswrapper[4981]: E0227 19:27:06.876616 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"dfae10fa58b4f4b0a4d1e5af01279c599655e4f3e4ce6260ea5eb9bc07b2e340\": container with ID starting with dfae10fa58b4f4b0a4d1e5af01279c599655e4f3e4ce6260ea5eb9bc07b2e340 not found: ID does not exist" containerID="dfae10fa58b4f4b0a4d1e5af01279c599655e4f3e4ce6260ea5eb9bc07b2e340" Feb 27 19:27:06 crc kubenswrapper[4981]: I0227 19:27:06.876653 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfae10fa58b4f4b0a4d1e5af01279c599655e4f3e4ce6260ea5eb9bc07b2e340"} err="failed to get container status \"dfae10fa58b4f4b0a4d1e5af01279c599655e4f3e4ce6260ea5eb9bc07b2e340\": rpc error: code = NotFound desc = could not find container \"dfae10fa58b4f4b0a4d1e5af01279c599655e4f3e4ce6260ea5eb9bc07b2e340\": container with ID starting with dfae10fa58b4f4b0a4d1e5af01279c599655e4f3e4ce6260ea5eb9bc07b2e340 not found: ID does not exist" Feb 27 19:27:06 crc kubenswrapper[4981]: I0227 19:27:06.876678 4981 scope.go:117] "RemoveContainer" containerID="42a63cc7b186ae1e7e7af2f5c4926b8b83992322afa4eeb1d940947e07f3a5f6" Feb 27 19:27:06 crc kubenswrapper[4981]: E0227 19:27:06.877433 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42a63cc7b186ae1e7e7af2f5c4926b8b83992322afa4eeb1d940947e07f3a5f6\": container with ID starting with 42a63cc7b186ae1e7e7af2f5c4926b8b83992322afa4eeb1d940947e07f3a5f6 not found: ID does not exist" containerID="42a63cc7b186ae1e7e7af2f5c4926b8b83992322afa4eeb1d940947e07f3a5f6" Feb 27 19:27:06 crc kubenswrapper[4981]: I0227 19:27:06.877489 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42a63cc7b186ae1e7e7af2f5c4926b8b83992322afa4eeb1d940947e07f3a5f6"} err="failed to get container status \"42a63cc7b186ae1e7e7af2f5c4926b8b83992322afa4eeb1d940947e07f3a5f6\": rpc error: code = NotFound desc = could not find container \"42a63cc7b186ae1e7e7af2f5c4926b8b83992322afa4eeb1d940947e07f3a5f6\": container with ID 
starting with 42a63cc7b186ae1e7e7af2f5c4926b8b83992322afa4eeb1d940947e07f3a5f6 not found: ID does not exist" Feb 27 19:27:06 crc kubenswrapper[4981]: I0227 19:27:06.877520 4981 scope.go:117] "RemoveContainer" containerID="9d46db41e4073ec34078f8da6fd89552a9bf79518a408e6f46b88104103495e0" Feb 27 19:27:06 crc kubenswrapper[4981]: E0227 19:27:06.878020 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d46db41e4073ec34078f8da6fd89552a9bf79518a408e6f46b88104103495e0\": container with ID starting with 9d46db41e4073ec34078f8da6fd89552a9bf79518a408e6f46b88104103495e0 not found: ID does not exist" containerID="9d46db41e4073ec34078f8da6fd89552a9bf79518a408e6f46b88104103495e0" Feb 27 19:27:06 crc kubenswrapper[4981]: I0227 19:27:06.878070 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d46db41e4073ec34078f8da6fd89552a9bf79518a408e6f46b88104103495e0"} err="failed to get container status \"9d46db41e4073ec34078f8da6fd89552a9bf79518a408e6f46b88104103495e0\": rpc error: code = NotFound desc = could not find container \"9d46db41e4073ec34078f8da6fd89552a9bf79518a408e6f46b88104103495e0\": container with ID starting with 9d46db41e4073ec34078f8da6fd89552a9bf79518a408e6f46b88104103495e0 not found: ID does not exist" Feb 27 19:27:07 crc kubenswrapper[4981]: I0227 19:27:07.636560 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8309757c-93d5-4250-8f09-fe20b6fd6835" path="/var/lib/kubelet/pods/8309757c-93d5-4250-8f09-fe20b6fd6835/volumes" Feb 27 19:27:08 crc kubenswrapper[4981]: I0227 19:27:08.628676 4981 scope.go:117] "RemoveContainer" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6" Feb 27 19:27:08 crc kubenswrapper[4981]: E0227 19:27:08.628925 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6"
Feb 27 19:27:23 crc kubenswrapper[4981]: I0227 19:27:23.629292 4981 scope.go:117] "RemoveContainer" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6"
Feb 27 19:27:23 crc kubenswrapper[4981]: E0227 19:27:23.629714 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6"
Feb 27 19:27:34 crc kubenswrapper[4981]: I0227 19:27:34.628823 4981 scope.go:117] "RemoveContainer" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6"
Feb 27 19:27:34 crc kubenswrapper[4981]: E0227 19:27:34.629780 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6"
Feb 27 19:27:49 crc kubenswrapper[4981]: I0227 19:27:49.628755 4981 scope.go:117] "RemoveContainer" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6"
Feb 27 19:27:49 crc kubenswrapper[4981]: E0227 19:27:49.629245 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6"
Feb 27 19:28:00 crc kubenswrapper[4981]: I0227 19:28:00.139682 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537008-s49nx"]
Feb 27 19:28:00 crc kubenswrapper[4981]: E0227 19:28:00.140610 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8309757c-93d5-4250-8f09-fe20b6fd6835" containerName="extract-utilities"
Feb 27 19:28:00 crc kubenswrapper[4981]: I0227 19:28:00.140628 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="8309757c-93d5-4250-8f09-fe20b6fd6835" containerName="extract-utilities"
Feb 27 19:28:00 crc kubenswrapper[4981]: E0227 19:28:00.140645 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8309757c-93d5-4250-8f09-fe20b6fd6835" containerName="registry-server"
Feb 27 19:28:00 crc kubenswrapper[4981]: I0227 19:28:00.140653 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="8309757c-93d5-4250-8f09-fe20b6fd6835" containerName="registry-server"
Feb 27 19:28:00 crc kubenswrapper[4981]: E0227 19:28:00.140672 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8309757c-93d5-4250-8f09-fe20b6fd6835" containerName="extract-content"
Feb 27 19:28:00 crc kubenswrapper[4981]: I0227 19:28:00.140680 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="8309757c-93d5-4250-8f09-fe20b6fd6835" containerName="extract-content"
Feb 27 19:28:00 crc kubenswrapper[4981]: I0227 19:28:00.140855 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="8309757c-93d5-4250-8f09-fe20b6fd6835" containerName="registry-server"
Feb 27 19:28:00 crc kubenswrapper[4981]: I0227 19:28:00.141425 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537008-s49nx"
Feb 27 19:28:00 crc kubenswrapper[4981]: I0227 19:28:00.143608 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 27 19:28:00 crc kubenswrapper[4981]: I0227 19:28:00.143756 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf"
Feb 27 19:28:00 crc kubenswrapper[4981]: I0227 19:28:00.143906 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 27 19:28:00 crc kubenswrapper[4981]: I0227 19:28:00.146114 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537008-s49nx"]
Feb 27 19:28:00 crc kubenswrapper[4981]: I0227 19:28:00.284483 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqwpn\" (UniqueName: \"kubernetes.io/projected/fdbe29ac-7373-4cf0-868e-12dbd41fdc67-kube-api-access-jqwpn\") pod \"auto-csr-approver-29537008-s49nx\" (UID: \"fdbe29ac-7373-4cf0-868e-12dbd41fdc67\") " pod="openshift-infra/auto-csr-approver-29537008-s49nx"
Feb 27 19:28:00 crc kubenswrapper[4981]: I0227 19:28:00.385399 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqwpn\" (UniqueName: \"kubernetes.io/projected/fdbe29ac-7373-4cf0-868e-12dbd41fdc67-kube-api-access-jqwpn\") pod \"auto-csr-approver-29537008-s49nx\" (UID: \"fdbe29ac-7373-4cf0-868e-12dbd41fdc67\") " pod="openshift-infra/auto-csr-approver-29537008-s49nx"
Feb 27 19:28:00 crc kubenswrapper[4981]: I0227 19:28:00.402900 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqwpn\" (UniqueName: \"kubernetes.io/projected/fdbe29ac-7373-4cf0-868e-12dbd41fdc67-kube-api-access-jqwpn\") pod \"auto-csr-approver-29537008-s49nx\" (UID: \"fdbe29ac-7373-4cf0-868e-12dbd41fdc67\") " pod="openshift-infra/auto-csr-approver-29537008-s49nx"
Feb 27 19:28:00 crc kubenswrapper[4981]: I0227 19:28:00.461468 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537008-s49nx"
Feb 27 19:28:00 crc kubenswrapper[4981]: I0227 19:28:00.862433 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537008-s49nx"]
Feb 27 19:28:01 crc kubenswrapper[4981]: I0227 19:28:01.165680 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537008-s49nx" event={"ID":"fdbe29ac-7373-4cf0-868e-12dbd41fdc67","Type":"ContainerStarted","Data":"fefa729650f929959b5e5391719cba98614a7090c1300a8b455efa400c37bcef"}
Feb 27 19:28:03 crc kubenswrapper[4981]: I0227 19:28:03.178319 4981 generic.go:334] "Generic (PLEG): container finished" podID="fdbe29ac-7373-4cf0-868e-12dbd41fdc67" containerID="d49458e56e5ecb2ecd9ebebfc078c66e25c5afc861f8380c58ba525ad31b98ae" exitCode=0
Feb 27 19:28:03 crc kubenswrapper[4981]: I0227 19:28:03.178368 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537008-s49nx" event={"ID":"fdbe29ac-7373-4cf0-868e-12dbd41fdc67","Type":"ContainerDied","Data":"d49458e56e5ecb2ecd9ebebfc078c66e25c5afc861f8380c58ba525ad31b98ae"}
Feb 27 19:28:04 crc kubenswrapper[4981]: I0227 19:28:04.450492 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537008-s49nx"
Feb 27 19:28:04 crc kubenswrapper[4981]: I0227 19:28:04.628856 4981 scope.go:117] "RemoveContainer" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6"
Feb 27 19:28:04 crc kubenswrapper[4981]: E0227 19:28:04.629103 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6"
Feb 27 19:28:04 crc kubenswrapper[4981]: I0227 19:28:04.640443 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqwpn\" (UniqueName: \"kubernetes.io/projected/fdbe29ac-7373-4cf0-868e-12dbd41fdc67-kube-api-access-jqwpn\") pod \"fdbe29ac-7373-4cf0-868e-12dbd41fdc67\" (UID: \"fdbe29ac-7373-4cf0-868e-12dbd41fdc67\") "
Feb 27 19:28:04 crc kubenswrapper[4981]: I0227 19:28:04.654403 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdbe29ac-7373-4cf0-868e-12dbd41fdc67-kube-api-access-jqwpn" (OuterVolumeSpecName: "kube-api-access-jqwpn") pod "fdbe29ac-7373-4cf0-868e-12dbd41fdc67" (UID: "fdbe29ac-7373-4cf0-868e-12dbd41fdc67"). InnerVolumeSpecName "kube-api-access-jqwpn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:28:04 crc kubenswrapper[4981]: I0227 19:28:04.742685 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqwpn\" (UniqueName: \"kubernetes.io/projected/fdbe29ac-7373-4cf0-868e-12dbd41fdc67-kube-api-access-jqwpn\") on node \"crc\" DevicePath \"\""
Feb 27 19:28:05 crc kubenswrapper[4981]: I0227 19:28:05.193403 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537008-s49nx" event={"ID":"fdbe29ac-7373-4cf0-868e-12dbd41fdc67","Type":"ContainerDied","Data":"fefa729650f929959b5e5391719cba98614a7090c1300a8b455efa400c37bcef"}
Feb 27 19:28:05 crc kubenswrapper[4981]: I0227 19:28:05.193445 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fefa729650f929959b5e5391719cba98614a7090c1300a8b455efa400c37bcef"
Feb 27 19:28:05 crc kubenswrapper[4981]: I0227 19:28:05.193695 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537008-s49nx"
Feb 27 19:28:05 crc kubenswrapper[4981]: I0227 19:28:05.521099 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537002-mc5hq"]
Feb 27 19:28:05 crc kubenswrapper[4981]: I0227 19:28:05.526235 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537002-mc5hq"]
Feb 27 19:28:05 crc kubenswrapper[4981]: I0227 19:28:05.637485 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4992e128-f31d-4501-9e9c-6967330dcaf1" path="/var/lib/kubelet/pods/4992e128-f31d-4501-9e9c-6967330dcaf1/volumes"
Feb 27 19:28:16 crc kubenswrapper[4981]: I0227 19:28:16.628930 4981 scope.go:117] "RemoveContainer" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6"
Feb 27 19:28:16 crc kubenswrapper[4981]: E0227 19:28:16.629675 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6"
Feb 27 19:28:23 crc kubenswrapper[4981]: I0227 19:28:23.017450 4981 scope.go:117] "RemoveContainer" containerID="e48ee52687794e937811e73f167a153be43d7d4660895d5a9bbb875ae8d14c1a"
Feb 27 19:28:31 crc kubenswrapper[4981]: I0227 19:28:31.631879 4981 scope.go:117] "RemoveContainer" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6"
Feb 27 19:28:31 crc kubenswrapper[4981]: E0227 19:28:31.632388 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6"
Feb 27 19:28:46 crc kubenswrapper[4981]: I0227 19:28:46.628583 4981 scope.go:117] "RemoveContainer" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6"
Feb 27 19:28:46 crc kubenswrapper[4981]: E0227 19:28:46.629339 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6"
Feb 27 19:28:58 crc kubenswrapper[4981]: I0227 19:28:58.629516 4981 scope.go:117] "RemoveContainer" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6"
Feb 27 19:28:59 crc kubenswrapper[4981]: I0227 19:28:59.549846 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerStarted","Data":"b01185e35f631fc4c627efbb37de556efc5840a81f915a452c6f124e1d4cf905"}
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.151394 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537010-7jn8n"]
Feb 27 19:30:00 crc kubenswrapper[4981]: E0227 19:30:00.152533 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdbe29ac-7373-4cf0-868e-12dbd41fdc67" containerName="oc"
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.152555 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdbe29ac-7373-4cf0-868e-12dbd41fdc67" containerName="oc"
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.152795 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdbe29ac-7373-4cf0-868e-12dbd41fdc67" containerName="oc"
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.153919 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-7jn8n"
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.155853 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.156031 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.158268 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537010-7jn8n"]
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.242382 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537010-p6w2v"]
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.243665 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537010-p6w2v"
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.248693 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.249078 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.249090 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf"
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.254769 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537010-p6w2v"]
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.326378 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6db480a-ce69-43c7-9e86-528ac6c69d1e-secret-volume\") pod \"collect-profiles-29537010-7jn8n\" (UID: \"a6db480a-ce69-43c7-9e86-528ac6c69d1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-7jn8n"
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.326470 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pstfs\" (UniqueName: \"kubernetes.io/projected/a6db480a-ce69-43c7-9e86-528ac6c69d1e-kube-api-access-pstfs\") pod \"collect-profiles-29537010-7jn8n\" (UID: \"a6db480a-ce69-43c7-9e86-528ac6c69d1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-7jn8n"
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.326522 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snw2v\" (UniqueName: \"kubernetes.io/projected/11d412f6-bf30-4199-be58-e966a4b3da09-kube-api-access-snw2v\") pod \"auto-csr-approver-29537010-p6w2v\" (UID: \"11d412f6-bf30-4199-be58-e966a4b3da09\") " pod="openshift-infra/auto-csr-approver-29537010-p6w2v"
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.326571 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6db480a-ce69-43c7-9e86-528ac6c69d1e-config-volume\") pod \"collect-profiles-29537010-7jn8n\" (UID: \"a6db480a-ce69-43c7-9e86-528ac6c69d1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-7jn8n"
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.427617 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6db480a-ce69-43c7-9e86-528ac6c69d1e-secret-volume\") pod \"collect-profiles-29537010-7jn8n\" (UID: \"a6db480a-ce69-43c7-9e86-528ac6c69d1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-7jn8n"
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.427671 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pstfs\" (UniqueName: \"kubernetes.io/projected/a6db480a-ce69-43c7-9e86-528ac6c69d1e-kube-api-access-pstfs\") pod \"collect-profiles-29537010-7jn8n\" (UID: \"a6db480a-ce69-43c7-9e86-528ac6c69d1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-7jn8n"
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.427717 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snw2v\" (UniqueName: \"kubernetes.io/projected/11d412f6-bf30-4199-be58-e966a4b3da09-kube-api-access-snw2v\") pod \"auto-csr-approver-29537010-p6w2v\" (UID: \"11d412f6-bf30-4199-be58-e966a4b3da09\") " pod="openshift-infra/auto-csr-approver-29537010-p6w2v"
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.427754 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6db480a-ce69-43c7-9e86-528ac6c69d1e-config-volume\") pod \"collect-profiles-29537010-7jn8n\" (UID: \"a6db480a-ce69-43c7-9e86-528ac6c69d1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-7jn8n"
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.428747 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6db480a-ce69-43c7-9e86-528ac6c69d1e-config-volume\") pod \"collect-profiles-29537010-7jn8n\" (UID: \"a6db480a-ce69-43c7-9e86-528ac6c69d1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-7jn8n"
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.439950 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6db480a-ce69-43c7-9e86-528ac6c69d1e-secret-volume\") pod \"collect-profiles-29537010-7jn8n\" (UID: \"a6db480a-ce69-43c7-9e86-528ac6c69d1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-7jn8n"
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.442390 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snw2v\" (UniqueName: \"kubernetes.io/projected/11d412f6-bf30-4199-be58-e966a4b3da09-kube-api-access-snw2v\") pod \"auto-csr-approver-29537010-p6w2v\" (UID: \"11d412f6-bf30-4199-be58-e966a4b3da09\") " pod="openshift-infra/auto-csr-approver-29537010-p6w2v"
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.442528 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pstfs\" (UniqueName: \"kubernetes.io/projected/a6db480a-ce69-43c7-9e86-528ac6c69d1e-kube-api-access-pstfs\") pod \"collect-profiles-29537010-7jn8n\" (UID: \"a6db480a-ce69-43c7-9e86-528ac6c69d1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-7jn8n"
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.471461 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-7jn8n"
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.558374 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537010-p6w2v"
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.803966 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537010-p6w2v"]
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.871213 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537010-7jn8n"]
Feb 27 19:30:00 crc kubenswrapper[4981]: W0227 19:30:00.875926 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6db480a_ce69_43c7_9e86_528ac6c69d1e.slice/crio-a7b9a7d30fd237ff70fd966ef591e8aa45adb8dac871b4e58b09266983f44b71 WatchSource:0}: Error finding container a7b9a7d30fd237ff70fd966ef591e8aa45adb8dac871b4e58b09266983f44b71: Status 404 returned error can't find the container with id a7b9a7d30fd237ff70fd966ef591e8aa45adb8dac871b4e58b09266983f44b71
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.976012 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-7jn8n" event={"ID":"a6db480a-ce69-43c7-9e86-528ac6c69d1e","Type":"ContainerStarted","Data":"a7b9a7d30fd237ff70fd966ef591e8aa45adb8dac871b4e58b09266983f44b71"}
Feb 27 19:30:00 crc kubenswrapper[4981]: I0227 19:30:00.977007 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537010-p6w2v" event={"ID":"11d412f6-bf30-4199-be58-e966a4b3da09","Type":"ContainerStarted","Data":"0b9f71205016339e00c3d1d8b913fb3d670220f4dabe9f51b45cdfe7b0b1fa86"}
Feb 27 19:30:01 crc kubenswrapper[4981]: I0227 19:30:01.982781 4981 generic.go:334] "Generic (PLEG): container finished" podID="a6db480a-ce69-43c7-9e86-528ac6c69d1e" containerID="010f039d49b93958bf5a575d66a0061509d63fb2dcdfa6a084a6eefc69045d0e" exitCode=0
Feb 27 19:30:01 crc kubenswrapper[4981]: I0227 19:30:01.982925 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-7jn8n" event={"ID":"a6db480a-ce69-43c7-9e86-528ac6c69d1e","Type":"ContainerDied","Data":"010f039d49b93958bf5a575d66a0061509d63fb2dcdfa6a084a6eefc69045d0e"}
Feb 27 19:30:01 crc kubenswrapper[4981]: I0227 19:30:01.984502 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537010-p6w2v" event={"ID":"11d412f6-bf30-4199-be58-e966a4b3da09","Type":"ContainerStarted","Data":"b63471a0bba7b8bc797a1282f73adb9991ae538beeeb89d2e40887c091459d43"}
Feb 27 19:30:02 crc kubenswrapper[4981]: I0227 19:30:02.009958 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537010-p6w2v" podStartSLOduration=1.068802854 podStartE2EDuration="2.009940098s" podCreationTimestamp="2026-02-27 19:30:00 +0000 UTC" firstStartedPulling="2026-02-27 19:30:00.811380637 +0000 UTC m=+2700.290161797" lastFinishedPulling="2026-02-27 19:30:01.752517881 +0000 UTC m=+2701.231299041" observedRunningTime="2026-02-27 19:30:02.007625416 +0000 UTC m=+2701.486406576" watchObservedRunningTime="2026-02-27 19:30:02.009940098 +0000 UTC m=+2701.488721258"
Feb 27 19:30:02 crc kubenswrapper[4981]: I0227 19:30:02.994499 4981 generic.go:334] "Generic (PLEG): container finished" podID="11d412f6-bf30-4199-be58-e966a4b3da09" containerID="b63471a0bba7b8bc797a1282f73adb9991ae538beeeb89d2e40887c091459d43" exitCode=0
Feb 27 19:30:02 crc kubenswrapper[4981]: I0227 19:30:02.994608 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537010-p6w2v" event={"ID":"11d412f6-bf30-4199-be58-e966a4b3da09","Type":"ContainerDied","Data":"b63471a0bba7b8bc797a1282f73adb9991ae538beeeb89d2e40887c091459d43"}
Feb 27 19:30:03 crc kubenswrapper[4981]: I0227 19:30:03.302690 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-7jn8n"
Feb 27 19:30:03 crc kubenswrapper[4981]: I0227 19:30:03.470906 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6db480a-ce69-43c7-9e86-528ac6c69d1e-config-volume\") pod \"a6db480a-ce69-43c7-9e86-528ac6c69d1e\" (UID: \"a6db480a-ce69-43c7-9e86-528ac6c69d1e\") "
Feb 27 19:30:03 crc kubenswrapper[4981]: I0227 19:30:03.471008 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pstfs\" (UniqueName: \"kubernetes.io/projected/a6db480a-ce69-43c7-9e86-528ac6c69d1e-kube-api-access-pstfs\") pod \"a6db480a-ce69-43c7-9e86-528ac6c69d1e\" (UID: \"a6db480a-ce69-43c7-9e86-528ac6c69d1e\") "
Feb 27 19:30:03 crc kubenswrapper[4981]: I0227 19:30:03.471298 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6db480a-ce69-43c7-9e86-528ac6c69d1e-secret-volume\") pod \"a6db480a-ce69-43c7-9e86-528ac6c69d1e\" (UID: \"a6db480a-ce69-43c7-9e86-528ac6c69d1e\") "
Feb 27 19:30:03 crc kubenswrapper[4981]: I0227 19:30:03.471427 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6db480a-ce69-43c7-9e86-528ac6c69d1e-config-volume" (OuterVolumeSpecName: "config-volume") pod "a6db480a-ce69-43c7-9e86-528ac6c69d1e" (UID: "a6db480a-ce69-43c7-9e86-528ac6c69d1e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:30:03 crc kubenswrapper[4981]: I0227 19:30:03.471719 4981 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6db480a-ce69-43c7-9e86-528ac6c69d1e-config-volume\") on node \"crc\" DevicePath \"\""
Feb 27 19:30:03 crc kubenswrapper[4981]: I0227 19:30:03.475869 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6db480a-ce69-43c7-9e86-528ac6c69d1e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a6db480a-ce69-43c7-9e86-528ac6c69d1e" (UID: "a6db480a-ce69-43c7-9e86-528ac6c69d1e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:30:03 crc kubenswrapper[4981]: I0227 19:30:03.476301 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6db480a-ce69-43c7-9e86-528ac6c69d1e-kube-api-access-pstfs" (OuterVolumeSpecName: "kube-api-access-pstfs") pod "a6db480a-ce69-43c7-9e86-528ac6c69d1e" (UID: "a6db480a-ce69-43c7-9e86-528ac6c69d1e"). InnerVolumeSpecName "kube-api-access-pstfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:30:03 crc kubenswrapper[4981]: I0227 19:30:03.573598 4981 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6db480a-ce69-43c7-9e86-528ac6c69d1e-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 27 19:30:03 crc kubenswrapper[4981]: I0227 19:30:03.573643 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pstfs\" (UniqueName: \"kubernetes.io/projected/a6db480a-ce69-43c7-9e86-528ac6c69d1e-kube-api-access-pstfs\") on node \"crc\" DevicePath \"\""
Feb 27 19:30:04 crc kubenswrapper[4981]: I0227 19:30:04.002365 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-7jn8n" event={"ID":"a6db480a-ce69-43c7-9e86-528ac6c69d1e","Type":"ContainerDied","Data":"a7b9a7d30fd237ff70fd966ef591e8aa45adb8dac871b4e58b09266983f44b71"}
Feb 27 19:30:04 crc kubenswrapper[4981]: I0227 19:30:04.002395 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-7jn8n"
Feb 27 19:30:04 crc kubenswrapper[4981]: I0227 19:30:04.002409 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7b9a7d30fd237ff70fd966ef591e8aa45adb8dac871b4e58b09266983f44b71"
Feb 27 19:30:04 crc kubenswrapper[4981]: I0227 19:30:04.220158 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537010-p6w2v"
Feb 27 19:30:04 crc kubenswrapper[4981]: I0227 19:30:04.286912 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snw2v\" (UniqueName: \"kubernetes.io/projected/11d412f6-bf30-4199-be58-e966a4b3da09-kube-api-access-snw2v\") pod \"11d412f6-bf30-4199-be58-e966a4b3da09\" (UID: \"11d412f6-bf30-4199-be58-e966a4b3da09\") "
Feb 27 19:30:04 crc kubenswrapper[4981]: I0227 19:30:04.292250 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11d412f6-bf30-4199-be58-e966a4b3da09-kube-api-access-snw2v" (OuterVolumeSpecName: "kube-api-access-snw2v") pod "11d412f6-bf30-4199-be58-e966a4b3da09" (UID: "11d412f6-bf30-4199-be58-e966a4b3da09"). InnerVolumeSpecName "kube-api-access-snw2v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:30:04 crc kubenswrapper[4981]: I0227 19:30:04.365702 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536965-kh5p2"]
Feb 27 19:30:04 crc kubenswrapper[4981]: I0227 19:30:04.370142 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536965-kh5p2"]
Feb 27 19:30:04 crc kubenswrapper[4981]: I0227 19:30:04.388027 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snw2v\" (UniqueName: \"kubernetes.io/projected/11d412f6-bf30-4199-be58-e966a4b3da09-kube-api-access-snw2v\") on node \"crc\" DevicePath \"\""
Feb 27 19:30:04 crc kubenswrapper[4981]: I0227 19:30:04.727148 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537004-rkvzd"]
Feb 27 19:30:04 crc kubenswrapper[4981]: I0227 19:30:04.731457 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537004-rkvzd"]
Feb 27 19:30:05 crc kubenswrapper[4981]: I0227 19:30:05.010314 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537010-p6w2v" event={"ID":"11d412f6-bf30-4199-be58-e966a4b3da09","Type":"ContainerDied","Data":"0b9f71205016339e00c3d1d8b913fb3d670220f4dabe9f51b45cdfe7b0b1fa86"}
Feb 27 19:30:05 crc kubenswrapper[4981]: I0227 19:30:05.010353 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b9f71205016339e00c3d1d8b913fb3d670220f4dabe9f51b45cdfe7b0b1fa86"
Feb 27 19:30:05 crc kubenswrapper[4981]: I0227 19:30:05.010447 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537010-p6w2v"
Feb 27 19:30:05 crc kubenswrapper[4981]: I0227 19:30:05.638476 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26dd5ebe-6ac6-4201-8ee8-aeb7744f5154" path="/var/lib/kubelet/pods/26dd5ebe-6ac6-4201-8ee8-aeb7744f5154/volumes"
Feb 27 19:30:05 crc kubenswrapper[4981]: I0227 19:30:05.640723 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7d035ce-f026-4668-9fca-c344f1fe60e3" path="/var/lib/kubelet/pods/b7d035ce-f026-4668-9fca-c344f1fe60e3/volumes"
Feb 27 19:30:23 crc kubenswrapper[4981]: I0227 19:30:23.097406 4981 scope.go:117] "RemoveContainer" containerID="dc25501470957530ce00490314bfa4ed1be9479a98dcf7441185058c5e491fb0"
Feb 27 19:30:23 crc kubenswrapper[4981]: I0227 19:30:23.134723 4981 scope.go:117] "RemoveContainer" containerID="683f3f2f0021bfd989263a07a7591bc65893d2e805b7474481414b3ad12cfc72"
Feb 27 19:31:20 crc kubenswrapper[4981]: I0227 19:31:20.248650 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 19:31:20 crc kubenswrapper[4981]: I0227 19:31:20.250099 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 19:31:50 crc kubenswrapper[4981]: I0227 19:31:50.249375 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 19:31:50 crc kubenswrapper[4981]: I0227 19:31:50.249936 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 19:32:00 crc kubenswrapper[4981]: I0227 19:32:00.136853 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537012-9r2bh"]
Feb 27 19:32:00 crc kubenswrapper[4981]: E0227 19:32:00.137711 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d412f6-bf30-4199-be58-e966a4b3da09" containerName="oc"
Feb 27 19:32:00 crc kubenswrapper[4981]: I0227 19:32:00.137726 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d412f6-bf30-4199-be58-e966a4b3da09" containerName="oc"
Feb 27 19:32:00 crc kubenswrapper[4981]: E0227 19:32:00.137748 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6db480a-ce69-43c7-9e86-528ac6c69d1e" containerName="collect-profiles"
Feb 27 19:32:00 crc kubenswrapper[4981]: I0227 19:32:00.137754 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6db480a-ce69-43c7-9e86-528ac6c69d1e" containerName="collect-profiles"
Feb 27 19:32:00 crc kubenswrapper[4981]: I0227 19:32:00.137897 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6db480a-ce69-43c7-9e86-528ac6c69d1e" containerName="collect-profiles"
Feb 27 19:32:00 crc kubenswrapper[4981]: I0227 19:32:00.137916 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="11d412f6-bf30-4199-be58-e966a4b3da09" containerName="oc"
Feb 27 19:32:00 crc kubenswrapper[4981]: I0227 19:32:00.138354 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537012-9r2bh"
Feb 27 19:32:00 crc kubenswrapper[4981]: I0227 19:32:00.142522 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 27 19:32:00 crc kubenswrapper[4981]: I0227 19:32:00.142905 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 27 19:32:00 crc kubenswrapper[4981]: I0227 19:32:00.142637 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf"
Feb 27 19:32:00 crc kubenswrapper[4981]: I0227 19:32:00.147543 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537012-9r2bh"]
Feb 27 19:32:00 crc kubenswrapper[4981]: I0227 19:32:00.250391 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9fm9\" (UniqueName: \"kubernetes.io/projected/16d1aa2c-10fa-4d14-a4b7-f6ba9bc3a833-kube-api-access-p9fm9\") pod \"auto-csr-approver-29537012-9r2bh\" (UID: \"16d1aa2c-10fa-4d14-a4b7-f6ba9bc3a833\") " pod="openshift-infra/auto-csr-approver-29537012-9r2bh"
Feb 27 19:32:00 crc kubenswrapper[4981]: I0227 19:32:00.352562 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9fm9\" (UniqueName: \"kubernetes.io/projected/16d1aa2c-10fa-4d14-a4b7-f6ba9bc3a833-kube-api-access-p9fm9\") pod \"auto-csr-approver-29537012-9r2bh\" (UID: \"16d1aa2c-10fa-4d14-a4b7-f6ba9bc3a833\") " pod="openshift-infra/auto-csr-approver-29537012-9r2bh"
Feb 27 19:32:00 crc kubenswrapper[4981]: I0227 19:32:00.370471 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9fm9\" (UniqueName: \"kubernetes.io/projected/16d1aa2c-10fa-4d14-a4b7-f6ba9bc3a833-kube-api-access-p9fm9\") pod \"auto-csr-approver-29537012-9r2bh\" (UID: \"16d1aa2c-10fa-4d14-a4b7-f6ba9bc3a833\") " pod="openshift-infra/auto-csr-approver-29537012-9r2bh"
Feb 27 19:32:00 crc kubenswrapper[4981]: I0227 19:32:00.456565 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537012-9r2bh"
Feb 27 19:32:00 crc kubenswrapper[4981]: I0227 19:32:00.859600 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537012-9r2bh"]
Feb 27 19:32:00 crc kubenswrapper[4981]: I0227 19:32:00.871705 4981 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 27 19:32:01 crc kubenswrapper[4981]: I0227 19:32:01.183123 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537012-9r2bh" event={"ID":"16d1aa2c-10fa-4d14-a4b7-f6ba9bc3a833","Type":"ContainerStarted","Data":"de2d426b8892280655efbb6544a6550d5107d8027850bb6565c217215c7d0aa3"}
Feb 27 19:32:09 crc kubenswrapper[4981]: I0227 19:32:09.237777 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537012-9r2bh" event={"ID":"16d1aa2c-10fa-4d14-a4b7-f6ba9bc3a833","Type":"ContainerStarted","Data":"13f255d9dbe7493ed50af7be68beed69e6168aa161f058528461485d341393c5"}
Feb 27 19:32:09 crc kubenswrapper[4981]: I0227 19:32:09.249138 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537012-9r2bh" podStartSLOduration=1.401281471 podStartE2EDuration="9.249120037s"
podCreationTimestamp="2026-02-27 19:32:00 +0000 UTC" firstStartedPulling="2026-02-27 19:32:00.871250478 +0000 UTC m=+2820.350031648" lastFinishedPulling="2026-02-27 19:32:08.719089054 +0000 UTC m=+2828.197870214" observedRunningTime="2026-02-27 19:32:09.248316651 +0000 UTC m=+2828.727097811" watchObservedRunningTime="2026-02-27 19:32:09.249120037 +0000 UTC m=+2828.727901197" Feb 27 19:32:10 crc kubenswrapper[4981]: I0227 19:32:10.245104 4981 generic.go:334] "Generic (PLEG): container finished" podID="16d1aa2c-10fa-4d14-a4b7-f6ba9bc3a833" containerID="13f255d9dbe7493ed50af7be68beed69e6168aa161f058528461485d341393c5" exitCode=0 Feb 27 19:32:10 crc kubenswrapper[4981]: I0227 19:32:10.245144 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537012-9r2bh" event={"ID":"16d1aa2c-10fa-4d14-a4b7-f6ba9bc3a833","Type":"ContainerDied","Data":"13f255d9dbe7493ed50af7be68beed69e6168aa161f058528461485d341393c5"} Feb 27 19:32:11 crc kubenswrapper[4981]: I0227 19:32:11.489499 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537012-9r2bh" Feb 27 19:32:11 crc kubenswrapper[4981]: I0227 19:32:11.603684 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9fm9\" (UniqueName: \"kubernetes.io/projected/16d1aa2c-10fa-4d14-a4b7-f6ba9bc3a833-kube-api-access-p9fm9\") pod \"16d1aa2c-10fa-4d14-a4b7-f6ba9bc3a833\" (UID: \"16d1aa2c-10fa-4d14-a4b7-f6ba9bc3a833\") " Feb 27 19:32:11 crc kubenswrapper[4981]: I0227 19:32:11.610073 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16d1aa2c-10fa-4d14-a4b7-f6ba9bc3a833-kube-api-access-p9fm9" (OuterVolumeSpecName: "kube-api-access-p9fm9") pod "16d1aa2c-10fa-4d14-a4b7-f6ba9bc3a833" (UID: "16d1aa2c-10fa-4d14-a4b7-f6ba9bc3a833"). InnerVolumeSpecName "kube-api-access-p9fm9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:32:11 crc kubenswrapper[4981]: I0227 19:32:11.705494 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9fm9\" (UniqueName: \"kubernetes.io/projected/16d1aa2c-10fa-4d14-a4b7-f6ba9bc3a833-kube-api-access-p9fm9\") on node \"crc\" DevicePath \"\"" Feb 27 19:32:12 crc kubenswrapper[4981]: I0227 19:32:12.258084 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537012-9r2bh" event={"ID":"16d1aa2c-10fa-4d14-a4b7-f6ba9bc3a833","Type":"ContainerDied","Data":"de2d426b8892280655efbb6544a6550d5107d8027850bb6565c217215c7d0aa3"} Feb 27 19:32:12 crc kubenswrapper[4981]: I0227 19:32:12.258122 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de2d426b8892280655efbb6544a6550d5107d8027850bb6565c217215c7d0aa3" Feb 27 19:32:12 crc kubenswrapper[4981]: I0227 19:32:12.258144 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537012-9r2bh" Feb 27 19:32:12 crc kubenswrapper[4981]: I0227 19:32:12.312751 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537006-rzdzv"] Feb 27 19:32:12 crc kubenswrapper[4981]: I0227 19:32:12.317382 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537006-rzdzv"] Feb 27 19:32:13 crc kubenswrapper[4981]: I0227 19:32:13.641505 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8a1b68a-07b5-44d6-b47a-654d9823e2a2" path="/var/lib/kubelet/pods/e8a1b68a-07b5-44d6-b47a-654d9823e2a2/volumes" Feb 27 19:32:20 crc kubenswrapper[4981]: I0227 19:32:20.248966 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 27 19:32:20 crc kubenswrapper[4981]: I0227 19:32:20.249344 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:32:20 crc kubenswrapper[4981]: I0227 19:32:20.249394 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 19:32:20 crc kubenswrapper[4981]: I0227 19:32:20.249962 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b01185e35f631fc4c627efbb37de556efc5840a81f915a452c6f124e1d4cf905"} pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 19:32:20 crc kubenswrapper[4981]: I0227 19:32:20.250025 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" containerID="cri-o://b01185e35f631fc4c627efbb37de556efc5840a81f915a452c6f124e1d4cf905" gracePeriod=600 Feb 27 19:32:21 crc kubenswrapper[4981]: I0227 19:32:21.316596 4981 generic.go:334] "Generic (PLEG): container finished" podID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerID="b01185e35f631fc4c627efbb37de556efc5840a81f915a452c6f124e1d4cf905" exitCode=0 Feb 27 19:32:21 crc kubenswrapper[4981]: I0227 19:32:21.316660 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" 
event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerDied","Data":"b01185e35f631fc4c627efbb37de556efc5840a81f915a452c6f124e1d4cf905"} Feb 27 19:32:21 crc kubenswrapper[4981]: I0227 19:32:21.317168 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerStarted","Data":"ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78"} Feb 27 19:32:21 crc kubenswrapper[4981]: I0227 19:32:21.317197 4981 scope.go:117] "RemoveContainer" containerID="d76eeabb60adc328a98e52170330d67ea923755f541843bc1db23b60e41d99d6" Feb 27 19:32:23 crc kubenswrapper[4981]: I0227 19:32:23.200919 4981 scope.go:117] "RemoveContainer" containerID="052950dd787b95cfeefdbf43bc5fa5522687c017c9a6ff66f8b38174264e5a74" Feb 27 19:32:38 crc kubenswrapper[4981]: I0227 19:32:38.693686 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x4tgq"] Feb 27 19:32:38 crc kubenswrapper[4981]: E0227 19:32:38.694715 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16d1aa2c-10fa-4d14-a4b7-f6ba9bc3a833" containerName="oc" Feb 27 19:32:38 crc kubenswrapper[4981]: I0227 19:32:38.694736 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="16d1aa2c-10fa-4d14-a4b7-f6ba9bc3a833" containerName="oc" Feb 27 19:32:38 crc kubenswrapper[4981]: I0227 19:32:38.694966 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="16d1aa2c-10fa-4d14-a4b7-f6ba9bc3a833" containerName="oc" Feb 27 19:32:38 crc kubenswrapper[4981]: I0227 19:32:38.696079 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x4tgq" Feb 27 19:32:38 crc kubenswrapper[4981]: I0227 19:32:38.706136 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x4tgq"] Feb 27 19:32:38 crc kubenswrapper[4981]: I0227 19:32:38.788302 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06d66405-bc74-4b8f-b1a4-d7dcf554adf7-utilities\") pod \"community-operators-x4tgq\" (UID: \"06d66405-bc74-4b8f-b1a4-d7dcf554adf7\") " pod="openshift-marketplace/community-operators-x4tgq" Feb 27 19:32:38 crc kubenswrapper[4981]: I0227 19:32:38.788409 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06d66405-bc74-4b8f-b1a4-d7dcf554adf7-catalog-content\") pod \"community-operators-x4tgq\" (UID: \"06d66405-bc74-4b8f-b1a4-d7dcf554adf7\") " pod="openshift-marketplace/community-operators-x4tgq" Feb 27 19:32:38 crc kubenswrapper[4981]: I0227 19:32:38.788464 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl8l6\" (UniqueName: \"kubernetes.io/projected/06d66405-bc74-4b8f-b1a4-d7dcf554adf7-kube-api-access-gl8l6\") pod \"community-operators-x4tgq\" (UID: \"06d66405-bc74-4b8f-b1a4-d7dcf554adf7\") " pod="openshift-marketplace/community-operators-x4tgq" Feb 27 19:32:38 crc kubenswrapper[4981]: I0227 19:32:38.890267 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl8l6\" (UniqueName: \"kubernetes.io/projected/06d66405-bc74-4b8f-b1a4-d7dcf554adf7-kube-api-access-gl8l6\") pod \"community-operators-x4tgq\" (UID: \"06d66405-bc74-4b8f-b1a4-d7dcf554adf7\") " pod="openshift-marketplace/community-operators-x4tgq" Feb 27 19:32:38 crc kubenswrapper[4981]: I0227 19:32:38.890357 4981 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06d66405-bc74-4b8f-b1a4-d7dcf554adf7-utilities\") pod \"community-operators-x4tgq\" (UID: \"06d66405-bc74-4b8f-b1a4-d7dcf554adf7\") " pod="openshift-marketplace/community-operators-x4tgq" Feb 27 19:32:38 crc kubenswrapper[4981]: I0227 19:32:38.890415 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06d66405-bc74-4b8f-b1a4-d7dcf554adf7-catalog-content\") pod \"community-operators-x4tgq\" (UID: \"06d66405-bc74-4b8f-b1a4-d7dcf554adf7\") " pod="openshift-marketplace/community-operators-x4tgq" Feb 27 19:32:38 crc kubenswrapper[4981]: I0227 19:32:38.890842 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06d66405-bc74-4b8f-b1a4-d7dcf554adf7-catalog-content\") pod \"community-operators-x4tgq\" (UID: \"06d66405-bc74-4b8f-b1a4-d7dcf554adf7\") " pod="openshift-marketplace/community-operators-x4tgq" Feb 27 19:32:38 crc kubenswrapper[4981]: I0227 19:32:38.891432 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06d66405-bc74-4b8f-b1a4-d7dcf554adf7-utilities\") pod \"community-operators-x4tgq\" (UID: \"06d66405-bc74-4b8f-b1a4-d7dcf554adf7\") " pod="openshift-marketplace/community-operators-x4tgq" Feb 27 19:32:38 crc kubenswrapper[4981]: I0227 19:32:38.910580 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl8l6\" (UniqueName: \"kubernetes.io/projected/06d66405-bc74-4b8f-b1a4-d7dcf554adf7-kube-api-access-gl8l6\") pod \"community-operators-x4tgq\" (UID: \"06d66405-bc74-4b8f-b1a4-d7dcf554adf7\") " pod="openshift-marketplace/community-operators-x4tgq" Feb 27 19:32:39 crc kubenswrapper[4981]: I0227 19:32:39.047257 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x4tgq" Feb 27 19:32:39 crc kubenswrapper[4981]: I0227 19:32:39.568964 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x4tgq"] Feb 27 19:32:40 crc kubenswrapper[4981]: I0227 19:32:40.436132 4981 generic.go:334] "Generic (PLEG): container finished" podID="06d66405-bc74-4b8f-b1a4-d7dcf554adf7" containerID="4751a1abb0bb268c93b0a3b5067a32e896431a72eaea0508ae0b85f13aaf328f" exitCode=0 Feb 27 19:32:40 crc kubenswrapper[4981]: I0227 19:32:40.436233 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4tgq" event={"ID":"06d66405-bc74-4b8f-b1a4-d7dcf554adf7","Type":"ContainerDied","Data":"4751a1abb0bb268c93b0a3b5067a32e896431a72eaea0508ae0b85f13aaf328f"} Feb 27 19:32:40 crc kubenswrapper[4981]: I0227 19:32:40.436522 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4tgq" event={"ID":"06d66405-bc74-4b8f-b1a4-d7dcf554adf7","Type":"ContainerStarted","Data":"f1502429dae8e17ec4e39eae091aac4a5ec777006f1346c1bce01d6fcbbebd0c"} Feb 27 19:32:41 crc kubenswrapper[4981]: I0227 19:32:41.444809 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4tgq" event={"ID":"06d66405-bc74-4b8f-b1a4-d7dcf554adf7","Type":"ContainerStarted","Data":"49878f49f3dc48bec8dd4213b8903f904f62d1d170f263a8f3874b76f67a82ec"} Feb 27 19:32:42 crc kubenswrapper[4981]: I0227 19:32:42.453242 4981 generic.go:334] "Generic (PLEG): container finished" podID="06d66405-bc74-4b8f-b1a4-d7dcf554adf7" containerID="49878f49f3dc48bec8dd4213b8903f904f62d1d170f263a8f3874b76f67a82ec" exitCode=0 Feb 27 19:32:42 crc kubenswrapper[4981]: I0227 19:32:42.453290 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4tgq" 
event={"ID":"06d66405-bc74-4b8f-b1a4-d7dcf554adf7","Type":"ContainerDied","Data":"49878f49f3dc48bec8dd4213b8903f904f62d1d170f263a8f3874b76f67a82ec"} Feb 27 19:32:43 crc kubenswrapper[4981]: I0227 19:32:43.462746 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4tgq" event={"ID":"06d66405-bc74-4b8f-b1a4-d7dcf554adf7","Type":"ContainerStarted","Data":"590afcdebb8767711f1db87d04e1ab9de17b77dff0f21bc0b4c67235370c3402"} Feb 27 19:32:43 crc kubenswrapper[4981]: I0227 19:32:43.486233 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x4tgq" podStartSLOduration=3.009228369 podStartE2EDuration="5.486218143s" podCreationTimestamp="2026-02-27 19:32:38 +0000 UTC" firstStartedPulling="2026-02-27 19:32:40.437420762 +0000 UTC m=+2859.916201922" lastFinishedPulling="2026-02-27 19:32:42.914410536 +0000 UTC m=+2862.393191696" observedRunningTime="2026-02-27 19:32:43.478496772 +0000 UTC m=+2862.957277942" watchObservedRunningTime="2026-02-27 19:32:43.486218143 +0000 UTC m=+2862.964999303" Feb 27 19:32:49 crc kubenswrapper[4981]: I0227 19:32:49.047797 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x4tgq" Feb 27 19:32:49 crc kubenswrapper[4981]: I0227 19:32:49.048357 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x4tgq" Feb 27 19:32:49 crc kubenswrapper[4981]: I0227 19:32:49.090629 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x4tgq" Feb 27 19:32:49 crc kubenswrapper[4981]: I0227 19:32:49.541283 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x4tgq" Feb 27 19:32:49 crc kubenswrapper[4981]: I0227 19:32:49.581656 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-x4tgq"] Feb 27 19:32:51 crc kubenswrapper[4981]: I0227 19:32:51.512137 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x4tgq" podUID="06d66405-bc74-4b8f-b1a4-d7dcf554adf7" containerName="registry-server" containerID="cri-o://590afcdebb8767711f1db87d04e1ab9de17b77dff0f21bc0b4c67235370c3402" gracePeriod=2 Feb 27 19:32:51 crc kubenswrapper[4981]: I0227 19:32:51.868670 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x4tgq" Feb 27 19:32:51 crc kubenswrapper[4981]: I0227 19:32:51.973313 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl8l6\" (UniqueName: \"kubernetes.io/projected/06d66405-bc74-4b8f-b1a4-d7dcf554adf7-kube-api-access-gl8l6\") pod \"06d66405-bc74-4b8f-b1a4-d7dcf554adf7\" (UID: \"06d66405-bc74-4b8f-b1a4-d7dcf554adf7\") " Feb 27 19:32:51 crc kubenswrapper[4981]: I0227 19:32:51.973362 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06d66405-bc74-4b8f-b1a4-d7dcf554adf7-catalog-content\") pod \"06d66405-bc74-4b8f-b1a4-d7dcf554adf7\" (UID: \"06d66405-bc74-4b8f-b1a4-d7dcf554adf7\") " Feb 27 19:32:51 crc kubenswrapper[4981]: I0227 19:32:51.973532 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06d66405-bc74-4b8f-b1a4-d7dcf554adf7-utilities\") pod \"06d66405-bc74-4b8f-b1a4-d7dcf554adf7\" (UID: \"06d66405-bc74-4b8f-b1a4-d7dcf554adf7\") " Feb 27 19:32:51 crc kubenswrapper[4981]: I0227 19:32:51.974623 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06d66405-bc74-4b8f-b1a4-d7dcf554adf7-utilities" (OuterVolumeSpecName: "utilities") pod "06d66405-bc74-4b8f-b1a4-d7dcf554adf7" (UID: 
"06d66405-bc74-4b8f-b1a4-d7dcf554adf7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:32:51 crc kubenswrapper[4981]: I0227 19:32:51.979022 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06d66405-bc74-4b8f-b1a4-d7dcf554adf7-kube-api-access-gl8l6" (OuterVolumeSpecName: "kube-api-access-gl8l6") pod "06d66405-bc74-4b8f-b1a4-d7dcf554adf7" (UID: "06d66405-bc74-4b8f-b1a4-d7dcf554adf7"). InnerVolumeSpecName "kube-api-access-gl8l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:32:52 crc kubenswrapper[4981]: I0227 19:32:52.075581 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06d66405-bc74-4b8f-b1a4-d7dcf554adf7-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:32:52 crc kubenswrapper[4981]: I0227 19:32:52.075642 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl8l6\" (UniqueName: \"kubernetes.io/projected/06d66405-bc74-4b8f-b1a4-d7dcf554adf7-kube-api-access-gl8l6\") on node \"crc\" DevicePath \"\"" Feb 27 19:32:52 crc kubenswrapper[4981]: I0227 19:32:52.297683 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06d66405-bc74-4b8f-b1a4-d7dcf554adf7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06d66405-bc74-4b8f-b1a4-d7dcf554adf7" (UID: "06d66405-bc74-4b8f-b1a4-d7dcf554adf7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:32:52 crc kubenswrapper[4981]: I0227 19:32:52.379532 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06d66405-bc74-4b8f-b1a4-d7dcf554adf7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:32:52 crc kubenswrapper[4981]: I0227 19:32:52.521089 4981 generic.go:334] "Generic (PLEG): container finished" podID="06d66405-bc74-4b8f-b1a4-d7dcf554adf7" containerID="590afcdebb8767711f1db87d04e1ab9de17b77dff0f21bc0b4c67235370c3402" exitCode=0 Feb 27 19:32:52 crc kubenswrapper[4981]: I0227 19:32:52.521137 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x4tgq" Feb 27 19:32:52 crc kubenswrapper[4981]: I0227 19:32:52.521138 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4tgq" event={"ID":"06d66405-bc74-4b8f-b1a4-d7dcf554adf7","Type":"ContainerDied","Data":"590afcdebb8767711f1db87d04e1ab9de17b77dff0f21bc0b4c67235370c3402"} Feb 27 19:32:52 crc kubenswrapper[4981]: I0227 19:32:52.521284 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x4tgq" event={"ID":"06d66405-bc74-4b8f-b1a4-d7dcf554adf7","Type":"ContainerDied","Data":"f1502429dae8e17ec4e39eae091aac4a5ec777006f1346c1bce01d6fcbbebd0c"} Feb 27 19:32:52 crc kubenswrapper[4981]: I0227 19:32:52.521330 4981 scope.go:117] "RemoveContainer" containerID="590afcdebb8767711f1db87d04e1ab9de17b77dff0f21bc0b4c67235370c3402" Feb 27 19:32:52 crc kubenswrapper[4981]: I0227 19:32:52.550009 4981 scope.go:117] "RemoveContainer" containerID="49878f49f3dc48bec8dd4213b8903f904f62d1d170f263a8f3874b76f67a82ec" Feb 27 19:32:52 crc kubenswrapper[4981]: I0227 19:32:52.559215 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x4tgq"] Feb 27 19:32:52 crc kubenswrapper[4981]: 
I0227 19:32:52.566647 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x4tgq"] Feb 27 19:32:52 crc kubenswrapper[4981]: I0227 19:32:52.573598 4981 scope.go:117] "RemoveContainer" containerID="4751a1abb0bb268c93b0a3b5067a32e896431a72eaea0508ae0b85f13aaf328f" Feb 27 19:32:52 crc kubenswrapper[4981]: I0227 19:32:52.592734 4981 scope.go:117] "RemoveContainer" containerID="590afcdebb8767711f1db87d04e1ab9de17b77dff0f21bc0b4c67235370c3402" Feb 27 19:32:52 crc kubenswrapper[4981]: E0227 19:32:52.593475 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"590afcdebb8767711f1db87d04e1ab9de17b77dff0f21bc0b4c67235370c3402\": container with ID starting with 590afcdebb8767711f1db87d04e1ab9de17b77dff0f21bc0b4c67235370c3402 not found: ID does not exist" containerID="590afcdebb8767711f1db87d04e1ab9de17b77dff0f21bc0b4c67235370c3402" Feb 27 19:32:52 crc kubenswrapper[4981]: I0227 19:32:52.593515 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"590afcdebb8767711f1db87d04e1ab9de17b77dff0f21bc0b4c67235370c3402"} err="failed to get container status \"590afcdebb8767711f1db87d04e1ab9de17b77dff0f21bc0b4c67235370c3402\": rpc error: code = NotFound desc = could not find container \"590afcdebb8767711f1db87d04e1ab9de17b77dff0f21bc0b4c67235370c3402\": container with ID starting with 590afcdebb8767711f1db87d04e1ab9de17b77dff0f21bc0b4c67235370c3402 not found: ID does not exist" Feb 27 19:32:52 crc kubenswrapper[4981]: I0227 19:32:52.593540 4981 scope.go:117] "RemoveContainer" containerID="49878f49f3dc48bec8dd4213b8903f904f62d1d170f263a8f3874b76f67a82ec" Feb 27 19:32:52 crc kubenswrapper[4981]: E0227 19:32:52.593862 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49878f49f3dc48bec8dd4213b8903f904f62d1d170f263a8f3874b76f67a82ec\": container 
with ID starting with 49878f49f3dc48bec8dd4213b8903f904f62d1d170f263a8f3874b76f67a82ec not found: ID does not exist" containerID="49878f49f3dc48bec8dd4213b8903f904f62d1d170f263a8f3874b76f67a82ec" Feb 27 19:32:52 crc kubenswrapper[4981]: I0227 19:32:52.593894 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49878f49f3dc48bec8dd4213b8903f904f62d1d170f263a8f3874b76f67a82ec"} err="failed to get container status \"49878f49f3dc48bec8dd4213b8903f904f62d1d170f263a8f3874b76f67a82ec\": rpc error: code = NotFound desc = could not find container \"49878f49f3dc48bec8dd4213b8903f904f62d1d170f263a8f3874b76f67a82ec\": container with ID starting with 49878f49f3dc48bec8dd4213b8903f904f62d1d170f263a8f3874b76f67a82ec not found: ID does not exist" Feb 27 19:32:52 crc kubenswrapper[4981]: I0227 19:32:52.593916 4981 scope.go:117] "RemoveContainer" containerID="4751a1abb0bb268c93b0a3b5067a32e896431a72eaea0508ae0b85f13aaf328f" Feb 27 19:32:52 crc kubenswrapper[4981]: E0227 19:32:52.594226 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4751a1abb0bb268c93b0a3b5067a32e896431a72eaea0508ae0b85f13aaf328f\": container with ID starting with 4751a1abb0bb268c93b0a3b5067a32e896431a72eaea0508ae0b85f13aaf328f not found: ID does not exist" containerID="4751a1abb0bb268c93b0a3b5067a32e896431a72eaea0508ae0b85f13aaf328f" Feb 27 19:32:52 crc kubenswrapper[4981]: I0227 19:32:52.594287 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4751a1abb0bb268c93b0a3b5067a32e896431a72eaea0508ae0b85f13aaf328f"} err="failed to get container status \"4751a1abb0bb268c93b0a3b5067a32e896431a72eaea0508ae0b85f13aaf328f\": rpc error: code = NotFound desc = could not find container \"4751a1abb0bb268c93b0a3b5067a32e896431a72eaea0508ae0b85f13aaf328f\": container with ID starting with 4751a1abb0bb268c93b0a3b5067a32e896431a72eaea0508ae0b85f13aaf328f not 
found: ID does not exist" Feb 27 19:32:53 crc kubenswrapper[4981]: I0227 19:32:53.636994 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06d66405-bc74-4b8f-b1a4-d7dcf554adf7" path="/var/lib/kubelet/pods/06d66405-bc74-4b8f-b1a4-d7dcf554adf7/volumes" Feb 27 19:33:31 crc kubenswrapper[4981]: I0227 19:33:31.545490 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-466cr"] Feb 27 19:33:31 crc kubenswrapper[4981]: E0227 19:33:31.553415 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06d66405-bc74-4b8f-b1a4-d7dcf554adf7" containerName="extract-utilities" Feb 27 19:33:31 crc kubenswrapper[4981]: I0227 19:33:31.553453 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="06d66405-bc74-4b8f-b1a4-d7dcf554adf7" containerName="extract-utilities" Feb 27 19:33:31 crc kubenswrapper[4981]: E0227 19:33:31.553503 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06d66405-bc74-4b8f-b1a4-d7dcf554adf7" containerName="registry-server" Feb 27 19:33:31 crc kubenswrapper[4981]: I0227 19:33:31.553511 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="06d66405-bc74-4b8f-b1a4-d7dcf554adf7" containerName="registry-server" Feb 27 19:33:31 crc kubenswrapper[4981]: E0227 19:33:31.553529 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06d66405-bc74-4b8f-b1a4-d7dcf554adf7" containerName="extract-content" Feb 27 19:33:31 crc kubenswrapper[4981]: I0227 19:33:31.553537 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="06d66405-bc74-4b8f-b1a4-d7dcf554adf7" containerName="extract-content" Feb 27 19:33:31 crc kubenswrapper[4981]: I0227 19:33:31.554703 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="06d66405-bc74-4b8f-b1a4-d7dcf554adf7" containerName="registry-server" Feb 27 19:33:31 crc kubenswrapper[4981]: I0227 19:33:31.563171 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-466cr" Feb 27 19:33:31 crc kubenswrapper[4981]: I0227 19:33:31.568982 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-466cr"] Feb 27 19:33:31 crc kubenswrapper[4981]: I0227 19:33:31.599358 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wxkl\" (UniqueName: \"kubernetes.io/projected/65e28264-0563-4ebc-9d23-5917d83b5a26-kube-api-access-6wxkl\") pod \"redhat-marketplace-466cr\" (UID: \"65e28264-0563-4ebc-9d23-5917d83b5a26\") " pod="openshift-marketplace/redhat-marketplace-466cr" Feb 27 19:33:31 crc kubenswrapper[4981]: I0227 19:33:31.599493 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65e28264-0563-4ebc-9d23-5917d83b5a26-utilities\") pod \"redhat-marketplace-466cr\" (UID: \"65e28264-0563-4ebc-9d23-5917d83b5a26\") " pod="openshift-marketplace/redhat-marketplace-466cr" Feb 27 19:33:31 crc kubenswrapper[4981]: I0227 19:33:31.599518 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65e28264-0563-4ebc-9d23-5917d83b5a26-catalog-content\") pod \"redhat-marketplace-466cr\" (UID: \"65e28264-0563-4ebc-9d23-5917d83b5a26\") " pod="openshift-marketplace/redhat-marketplace-466cr" Feb 27 19:33:31 crc kubenswrapper[4981]: I0227 19:33:31.700890 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wxkl\" (UniqueName: \"kubernetes.io/projected/65e28264-0563-4ebc-9d23-5917d83b5a26-kube-api-access-6wxkl\") pod \"redhat-marketplace-466cr\" (UID: \"65e28264-0563-4ebc-9d23-5917d83b5a26\") " pod="openshift-marketplace/redhat-marketplace-466cr" Feb 27 19:33:31 crc kubenswrapper[4981]: I0227 19:33:31.701008 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65e28264-0563-4ebc-9d23-5917d83b5a26-utilities\") pod \"redhat-marketplace-466cr\" (UID: \"65e28264-0563-4ebc-9d23-5917d83b5a26\") " pod="openshift-marketplace/redhat-marketplace-466cr" Feb 27 19:33:31 crc kubenswrapper[4981]: I0227 19:33:31.701028 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65e28264-0563-4ebc-9d23-5917d83b5a26-catalog-content\") pod \"redhat-marketplace-466cr\" (UID: \"65e28264-0563-4ebc-9d23-5917d83b5a26\") " pod="openshift-marketplace/redhat-marketplace-466cr" Feb 27 19:33:31 crc kubenswrapper[4981]: I0227 19:33:31.701644 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65e28264-0563-4ebc-9d23-5917d83b5a26-utilities\") pod \"redhat-marketplace-466cr\" (UID: \"65e28264-0563-4ebc-9d23-5917d83b5a26\") " pod="openshift-marketplace/redhat-marketplace-466cr" Feb 27 19:33:31 crc kubenswrapper[4981]: I0227 19:33:31.701914 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65e28264-0563-4ebc-9d23-5917d83b5a26-catalog-content\") pod \"redhat-marketplace-466cr\" (UID: \"65e28264-0563-4ebc-9d23-5917d83b5a26\") " pod="openshift-marketplace/redhat-marketplace-466cr" Feb 27 19:33:31 crc kubenswrapper[4981]: I0227 19:33:31.725103 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wxkl\" (UniqueName: \"kubernetes.io/projected/65e28264-0563-4ebc-9d23-5917d83b5a26-kube-api-access-6wxkl\") pod \"redhat-marketplace-466cr\" (UID: \"65e28264-0563-4ebc-9d23-5917d83b5a26\") " pod="openshift-marketplace/redhat-marketplace-466cr" Feb 27 19:33:31 crc kubenswrapper[4981]: I0227 19:33:31.887331 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-466cr" Feb 27 19:33:32 crc kubenswrapper[4981]: I0227 19:33:32.319524 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-466cr"] Feb 27 19:33:32 crc kubenswrapper[4981]: I0227 19:33:32.770288 4981 generic.go:334] "Generic (PLEG): container finished" podID="65e28264-0563-4ebc-9d23-5917d83b5a26" containerID="996e8c5321ed85cabe45353fa2b33ca818c39eb6f8d8edff6d8195b10aba2db8" exitCode=0 Feb 27 19:33:32 crc kubenswrapper[4981]: I0227 19:33:32.770352 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-466cr" event={"ID":"65e28264-0563-4ebc-9d23-5917d83b5a26","Type":"ContainerDied","Data":"996e8c5321ed85cabe45353fa2b33ca818c39eb6f8d8edff6d8195b10aba2db8"} Feb 27 19:33:32 crc kubenswrapper[4981]: I0227 19:33:32.770403 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-466cr" event={"ID":"65e28264-0563-4ebc-9d23-5917d83b5a26","Type":"ContainerStarted","Data":"e035a0fb3eb0649affd191b7fa3d7e6ff446838cb7ba3f0c3dbff6c7932edc44"} Feb 27 19:33:33 crc kubenswrapper[4981]: E0227 19:33:33.380628 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 19:33:33 crc kubenswrapper[4981]: E0227 19:33:33.381119 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6wxkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-466cr_openshift-marketplace(65e28264-0563-4ebc-9d23-5917d83b5a26): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:33:33 crc kubenswrapper[4981]: E0227 19:33:33.382618 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest 
list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-466cr" podUID="65e28264-0563-4ebc-9d23-5917d83b5a26" Feb 27 19:33:33 crc kubenswrapper[4981]: E0227 19:33:33.778332 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-466cr" podUID="65e28264-0563-4ebc-9d23-5917d83b5a26" Feb 27 19:33:47 crc kubenswrapper[4981]: I0227 19:33:47.860988 4981 generic.go:334] "Generic (PLEG): container finished" podID="65e28264-0563-4ebc-9d23-5917d83b5a26" containerID="09d4283ac98baa32196fe65877ab83bbc29a89f1698402582dc5d0a4df330416" exitCode=0 Feb 27 19:33:47 crc kubenswrapper[4981]: I0227 19:33:47.861089 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-466cr" event={"ID":"65e28264-0563-4ebc-9d23-5917d83b5a26","Type":"ContainerDied","Data":"09d4283ac98baa32196fe65877ab83bbc29a89f1698402582dc5d0a4df330416"} Feb 27 19:33:48 crc kubenswrapper[4981]: I0227 19:33:48.870183 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-466cr" event={"ID":"65e28264-0563-4ebc-9d23-5917d83b5a26","Type":"ContainerStarted","Data":"bcbbcb994f3d0cd44381c16236d2dea7436e16f3c8ef1eb5b007e36df9df5a6d"} Feb 27 19:33:48 crc kubenswrapper[4981]: I0227 19:33:48.899882 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-466cr" podStartSLOduration=2.446821888 podStartE2EDuration="17.899860156s" podCreationTimestamp="2026-02-27 19:33:31 +0000 UTC" firstStartedPulling="2026-02-27 
19:33:32.772335021 +0000 UTC m=+2912.251116181" lastFinishedPulling="2026-02-27 19:33:48.225373279 +0000 UTC m=+2927.704154449" observedRunningTime="2026-02-27 19:33:48.888687247 +0000 UTC m=+2928.367468407" watchObservedRunningTime="2026-02-27 19:33:48.899860156 +0000 UTC m=+2928.378641316" Feb 27 19:33:51 crc kubenswrapper[4981]: I0227 19:33:51.888227 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-466cr" Feb 27 19:33:51 crc kubenswrapper[4981]: I0227 19:33:51.888288 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-466cr" Feb 27 19:33:51 crc kubenswrapper[4981]: I0227 19:33:51.929415 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-466cr" Feb 27 19:34:00 crc kubenswrapper[4981]: I0227 19:34:00.135798 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537014-wh9vm"] Feb 27 19:34:00 crc kubenswrapper[4981]: I0227 19:34:00.137230 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537014-wh9vm" Feb 27 19:34:00 crc kubenswrapper[4981]: I0227 19:34:00.139536 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:34:00 crc kubenswrapper[4981]: I0227 19:34:00.140000 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf" Feb 27 19:34:00 crc kubenswrapper[4981]: I0227 19:34:00.143181 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:34:00 crc kubenswrapper[4981]: I0227 19:34:00.154488 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537014-wh9vm"] Feb 27 19:34:00 crc kubenswrapper[4981]: I0227 19:34:00.237712 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f27j9\" (UniqueName: \"kubernetes.io/projected/49394faf-2559-4c97-9601-df2b3c500a1e-kube-api-access-f27j9\") pod \"auto-csr-approver-29537014-wh9vm\" (UID: \"49394faf-2559-4c97-9601-df2b3c500a1e\") " pod="openshift-infra/auto-csr-approver-29537014-wh9vm" Feb 27 19:34:00 crc kubenswrapper[4981]: I0227 19:34:00.339350 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f27j9\" (UniqueName: \"kubernetes.io/projected/49394faf-2559-4c97-9601-df2b3c500a1e-kube-api-access-f27j9\") pod \"auto-csr-approver-29537014-wh9vm\" (UID: \"49394faf-2559-4c97-9601-df2b3c500a1e\") " pod="openshift-infra/auto-csr-approver-29537014-wh9vm" Feb 27 19:34:00 crc kubenswrapper[4981]: I0227 19:34:00.358327 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f27j9\" (UniqueName: \"kubernetes.io/projected/49394faf-2559-4c97-9601-df2b3c500a1e-kube-api-access-f27j9\") pod \"auto-csr-approver-29537014-wh9vm\" (UID: \"49394faf-2559-4c97-9601-df2b3c500a1e\") " 
pod="openshift-infra/auto-csr-approver-29537014-wh9vm" Feb 27 19:34:00 crc kubenswrapper[4981]: I0227 19:34:00.453352 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537014-wh9vm" Feb 27 19:34:00 crc kubenswrapper[4981]: I0227 19:34:00.843449 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537014-wh9vm"] Feb 27 19:34:00 crc kubenswrapper[4981]: I0227 19:34:00.952421 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537014-wh9vm" event={"ID":"49394faf-2559-4c97-9601-df2b3c500a1e","Type":"ContainerStarted","Data":"1484fe8f92fb6d658c4838a623f09b277aa1532b9f2a52cb25b308ccf5c153cc"} Feb 27 19:34:01 crc kubenswrapper[4981]: I0227 19:34:01.931616 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-466cr" Feb 27 19:34:01 crc kubenswrapper[4981]: I0227 19:34:01.966674 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537014-wh9vm" event={"ID":"49394faf-2559-4c97-9601-df2b3c500a1e","Type":"ContainerStarted","Data":"f82a49db1780a2eaaf253e0c71e3c58df790606b7b497dd9fc1ed095afc09a59"} Feb 27 19:34:01 crc kubenswrapper[4981]: I0227 19:34:01.980445 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537014-wh9vm" podStartSLOduration=1.194079386 podStartE2EDuration="1.980421238s" podCreationTimestamp="2026-02-27 19:34:00 +0000 UTC" firstStartedPulling="2026-02-27 19:34:00.860188463 +0000 UTC m=+2940.338969623" lastFinishedPulling="2026-02-27 19:34:01.646530305 +0000 UTC m=+2941.125311475" observedRunningTime="2026-02-27 19:34:01.97823407 +0000 UTC m=+2941.457015240" watchObservedRunningTime="2026-02-27 19:34:01.980421238 +0000 UTC m=+2941.459202398" Feb 27 19:34:01 crc kubenswrapper[4981]: I0227 19:34:01.992437 4981 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-466cr"] Feb 27 19:34:01 crc kubenswrapper[4981]: I0227 19:34:01.992751 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-466cr" podUID="65e28264-0563-4ebc-9d23-5917d83b5a26" containerName="registry-server" containerID="cri-o://bcbbcb994f3d0cd44381c16236d2dea7436e16f3c8ef1eb5b007e36df9df5a6d" gracePeriod=2 Feb 27 19:34:02 crc kubenswrapper[4981]: I0227 19:34:02.359795 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-466cr" Feb 27 19:34:02 crc kubenswrapper[4981]: I0227 19:34:02.466836 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65e28264-0563-4ebc-9d23-5917d83b5a26-utilities\") pod \"65e28264-0563-4ebc-9d23-5917d83b5a26\" (UID: \"65e28264-0563-4ebc-9d23-5917d83b5a26\") " Feb 27 19:34:02 crc kubenswrapper[4981]: I0227 19:34:02.467024 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65e28264-0563-4ebc-9d23-5917d83b5a26-catalog-content\") pod \"65e28264-0563-4ebc-9d23-5917d83b5a26\" (UID: \"65e28264-0563-4ebc-9d23-5917d83b5a26\") " Feb 27 19:34:02 crc kubenswrapper[4981]: I0227 19:34:02.467365 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wxkl\" (UniqueName: \"kubernetes.io/projected/65e28264-0563-4ebc-9d23-5917d83b5a26-kube-api-access-6wxkl\") pod \"65e28264-0563-4ebc-9d23-5917d83b5a26\" (UID: \"65e28264-0563-4ebc-9d23-5917d83b5a26\") " Feb 27 19:34:02 crc kubenswrapper[4981]: I0227 19:34:02.467999 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65e28264-0563-4ebc-9d23-5917d83b5a26-utilities" (OuterVolumeSpecName: "utilities") pod 
"65e28264-0563-4ebc-9d23-5917d83b5a26" (UID: "65e28264-0563-4ebc-9d23-5917d83b5a26"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:34:02 crc kubenswrapper[4981]: I0227 19:34:02.473413 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65e28264-0563-4ebc-9d23-5917d83b5a26-kube-api-access-6wxkl" (OuterVolumeSpecName: "kube-api-access-6wxkl") pod "65e28264-0563-4ebc-9d23-5917d83b5a26" (UID: "65e28264-0563-4ebc-9d23-5917d83b5a26"). InnerVolumeSpecName "kube-api-access-6wxkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:34:02 crc kubenswrapper[4981]: I0227 19:34:02.496720 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65e28264-0563-4ebc-9d23-5917d83b5a26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65e28264-0563-4ebc-9d23-5917d83b5a26" (UID: "65e28264-0563-4ebc-9d23-5917d83b5a26"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:34:02 crc kubenswrapper[4981]: I0227 19:34:02.569232 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wxkl\" (UniqueName: \"kubernetes.io/projected/65e28264-0563-4ebc-9d23-5917d83b5a26-kube-api-access-6wxkl\") on node \"crc\" DevicePath \"\"" Feb 27 19:34:02 crc kubenswrapper[4981]: I0227 19:34:02.569262 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65e28264-0563-4ebc-9d23-5917d83b5a26-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:34:02 crc kubenswrapper[4981]: I0227 19:34:02.569272 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65e28264-0563-4ebc-9d23-5917d83b5a26-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:34:02 crc kubenswrapper[4981]: I0227 19:34:02.974665 4981 generic.go:334] "Generic (PLEG): container finished" podID="49394faf-2559-4c97-9601-df2b3c500a1e" containerID="f82a49db1780a2eaaf253e0c71e3c58df790606b7b497dd9fc1ed095afc09a59" exitCode=0 Feb 27 19:34:02 crc kubenswrapper[4981]: I0227 19:34:02.974707 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537014-wh9vm" event={"ID":"49394faf-2559-4c97-9601-df2b3c500a1e","Type":"ContainerDied","Data":"f82a49db1780a2eaaf253e0c71e3c58df790606b7b497dd9fc1ed095afc09a59"} Feb 27 19:34:02 crc kubenswrapper[4981]: I0227 19:34:02.977414 4981 generic.go:334] "Generic (PLEG): container finished" podID="65e28264-0563-4ebc-9d23-5917d83b5a26" containerID="bcbbcb994f3d0cd44381c16236d2dea7436e16f3c8ef1eb5b007e36df9df5a6d" exitCode=0 Feb 27 19:34:02 crc kubenswrapper[4981]: I0227 19:34:02.977456 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-466cr" 
event={"ID":"65e28264-0563-4ebc-9d23-5917d83b5a26","Type":"ContainerDied","Data":"bcbbcb994f3d0cd44381c16236d2dea7436e16f3c8ef1eb5b007e36df9df5a6d"} Feb 27 19:34:02 crc kubenswrapper[4981]: I0227 19:34:02.977465 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-466cr" Feb 27 19:34:02 crc kubenswrapper[4981]: I0227 19:34:02.977484 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-466cr" event={"ID":"65e28264-0563-4ebc-9d23-5917d83b5a26","Type":"ContainerDied","Data":"e035a0fb3eb0649affd191b7fa3d7e6ff446838cb7ba3f0c3dbff6c7932edc44"} Feb 27 19:34:02 crc kubenswrapper[4981]: I0227 19:34:02.977508 4981 scope.go:117] "RemoveContainer" containerID="bcbbcb994f3d0cd44381c16236d2dea7436e16f3c8ef1eb5b007e36df9df5a6d" Feb 27 19:34:02 crc kubenswrapper[4981]: I0227 19:34:02.995609 4981 scope.go:117] "RemoveContainer" containerID="09d4283ac98baa32196fe65877ab83bbc29a89f1698402582dc5d0a4df330416" Feb 27 19:34:03 crc kubenswrapper[4981]: I0227 19:34:03.008404 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-466cr"] Feb 27 19:34:03 crc kubenswrapper[4981]: I0227 19:34:03.012945 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-466cr"] Feb 27 19:34:03 crc kubenswrapper[4981]: I0227 19:34:03.032370 4981 scope.go:117] "RemoveContainer" containerID="996e8c5321ed85cabe45353fa2b33ca818c39eb6f8d8edff6d8195b10aba2db8" Feb 27 19:34:03 crc kubenswrapper[4981]: I0227 19:34:03.046594 4981 scope.go:117] "RemoveContainer" containerID="bcbbcb994f3d0cd44381c16236d2dea7436e16f3c8ef1eb5b007e36df9df5a6d" Feb 27 19:34:03 crc kubenswrapper[4981]: E0227 19:34:03.047126 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcbbcb994f3d0cd44381c16236d2dea7436e16f3c8ef1eb5b007e36df9df5a6d\": container 
with ID starting with bcbbcb994f3d0cd44381c16236d2dea7436e16f3c8ef1eb5b007e36df9df5a6d not found: ID does not exist" containerID="bcbbcb994f3d0cd44381c16236d2dea7436e16f3c8ef1eb5b007e36df9df5a6d" Feb 27 19:34:03 crc kubenswrapper[4981]: I0227 19:34:03.047164 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcbbcb994f3d0cd44381c16236d2dea7436e16f3c8ef1eb5b007e36df9df5a6d"} err="failed to get container status \"bcbbcb994f3d0cd44381c16236d2dea7436e16f3c8ef1eb5b007e36df9df5a6d\": rpc error: code = NotFound desc = could not find container \"bcbbcb994f3d0cd44381c16236d2dea7436e16f3c8ef1eb5b007e36df9df5a6d\": container with ID starting with bcbbcb994f3d0cd44381c16236d2dea7436e16f3c8ef1eb5b007e36df9df5a6d not found: ID does not exist" Feb 27 19:34:03 crc kubenswrapper[4981]: I0227 19:34:03.047188 4981 scope.go:117] "RemoveContainer" containerID="09d4283ac98baa32196fe65877ab83bbc29a89f1698402582dc5d0a4df330416" Feb 27 19:34:03 crc kubenswrapper[4981]: E0227 19:34:03.047479 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09d4283ac98baa32196fe65877ab83bbc29a89f1698402582dc5d0a4df330416\": container with ID starting with 09d4283ac98baa32196fe65877ab83bbc29a89f1698402582dc5d0a4df330416 not found: ID does not exist" containerID="09d4283ac98baa32196fe65877ab83bbc29a89f1698402582dc5d0a4df330416" Feb 27 19:34:03 crc kubenswrapper[4981]: I0227 19:34:03.047502 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09d4283ac98baa32196fe65877ab83bbc29a89f1698402582dc5d0a4df330416"} err="failed to get container status \"09d4283ac98baa32196fe65877ab83bbc29a89f1698402582dc5d0a4df330416\": rpc error: code = NotFound desc = could not find container \"09d4283ac98baa32196fe65877ab83bbc29a89f1698402582dc5d0a4df330416\": container with ID starting with 09d4283ac98baa32196fe65877ab83bbc29a89f1698402582dc5d0a4df330416 not 
found: ID does not exist" Feb 27 19:34:03 crc kubenswrapper[4981]: I0227 19:34:03.047521 4981 scope.go:117] "RemoveContainer" containerID="996e8c5321ed85cabe45353fa2b33ca818c39eb6f8d8edff6d8195b10aba2db8" Feb 27 19:34:03 crc kubenswrapper[4981]: E0227 19:34:03.047900 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"996e8c5321ed85cabe45353fa2b33ca818c39eb6f8d8edff6d8195b10aba2db8\": container with ID starting with 996e8c5321ed85cabe45353fa2b33ca818c39eb6f8d8edff6d8195b10aba2db8 not found: ID does not exist" containerID="996e8c5321ed85cabe45353fa2b33ca818c39eb6f8d8edff6d8195b10aba2db8" Feb 27 19:34:03 crc kubenswrapper[4981]: I0227 19:34:03.047943 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"996e8c5321ed85cabe45353fa2b33ca818c39eb6f8d8edff6d8195b10aba2db8"} err="failed to get container status \"996e8c5321ed85cabe45353fa2b33ca818c39eb6f8d8edff6d8195b10aba2db8\": rpc error: code = NotFound desc = could not find container \"996e8c5321ed85cabe45353fa2b33ca818c39eb6f8d8edff6d8195b10aba2db8\": container with ID starting with 996e8c5321ed85cabe45353fa2b33ca818c39eb6f8d8edff6d8195b10aba2db8 not found: ID does not exist" Feb 27 19:34:03 crc kubenswrapper[4981]: I0227 19:34:03.637282 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65e28264-0563-4ebc-9d23-5917d83b5a26" path="/var/lib/kubelet/pods/65e28264-0563-4ebc-9d23-5917d83b5a26/volumes" Feb 27 19:34:04 crc kubenswrapper[4981]: I0227 19:34:04.250589 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537014-wh9vm" Feb 27 19:34:04 crc kubenswrapper[4981]: I0227 19:34:04.390349 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f27j9\" (UniqueName: \"kubernetes.io/projected/49394faf-2559-4c97-9601-df2b3c500a1e-kube-api-access-f27j9\") pod \"49394faf-2559-4c97-9601-df2b3c500a1e\" (UID: \"49394faf-2559-4c97-9601-df2b3c500a1e\") " Feb 27 19:34:04 crc kubenswrapper[4981]: I0227 19:34:04.394478 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49394faf-2559-4c97-9601-df2b3c500a1e-kube-api-access-f27j9" (OuterVolumeSpecName: "kube-api-access-f27j9") pod "49394faf-2559-4c97-9601-df2b3c500a1e" (UID: "49394faf-2559-4c97-9601-df2b3c500a1e"). InnerVolumeSpecName "kube-api-access-f27j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:34:04 crc kubenswrapper[4981]: I0227 19:34:04.492138 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f27j9\" (UniqueName: \"kubernetes.io/projected/49394faf-2559-4c97-9601-df2b3c500a1e-kube-api-access-f27j9\") on node \"crc\" DevicePath \"\"" Feb 27 19:34:04 crc kubenswrapper[4981]: I0227 19:34:04.706195 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537008-s49nx"] Feb 27 19:34:04 crc kubenswrapper[4981]: I0227 19:34:04.711070 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537008-s49nx"] Feb 27 19:34:04 crc kubenswrapper[4981]: I0227 19:34:04.999738 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537014-wh9vm" event={"ID":"49394faf-2559-4c97-9601-df2b3c500a1e","Type":"ContainerDied","Data":"1484fe8f92fb6d658c4838a623f09b277aa1532b9f2a52cb25b308ccf5c153cc"} Feb 27 19:34:04 crc kubenswrapper[4981]: I0227 19:34:04.999790 4981 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="1484fe8f92fb6d658c4838a623f09b277aa1532b9f2a52cb25b308ccf5c153cc" Feb 27 19:34:04 crc kubenswrapper[4981]: I0227 19:34:04.999858 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537014-wh9vm" Feb 27 19:34:05 crc kubenswrapper[4981]: I0227 19:34:05.636646 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdbe29ac-7373-4cf0-868e-12dbd41fdc67" path="/var/lib/kubelet/pods/fdbe29ac-7373-4cf0-868e-12dbd41fdc67/volumes" Feb 27 19:34:20 crc kubenswrapper[4981]: I0227 19:34:20.248973 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:34:20 crc kubenswrapper[4981]: I0227 19:34:20.249782 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:34:23 crc kubenswrapper[4981]: I0227 19:34:23.286848 4981 scope.go:117] "RemoveContainer" containerID="d49458e56e5ecb2ecd9ebebfc078c66e25c5afc861f8380c58ba525ad31b98ae" Feb 27 19:34:50 crc kubenswrapper[4981]: I0227 19:34:50.248381 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:34:50 crc kubenswrapper[4981]: I0227 19:34:50.248880 4981 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:35:20 crc kubenswrapper[4981]: I0227 19:35:20.249515 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:35:20 crc kubenswrapper[4981]: I0227 19:35:20.250074 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:35:20 crc kubenswrapper[4981]: I0227 19:35:20.250120 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 19:35:20 crc kubenswrapper[4981]: I0227 19:35:20.250702 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78"} pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 19:35:20 crc kubenswrapper[4981]: I0227 19:35:20.250747 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" 
containerID="cri-o://ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" gracePeriod=600 Feb 27 19:35:20 crc kubenswrapper[4981]: E0227 19:35:20.419276 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:35:20 crc kubenswrapper[4981]: I0227 19:35:20.478752 4981 generic.go:334] "Generic (PLEG): container finished" podID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" exitCode=0 Feb 27 19:35:20 crc kubenswrapper[4981]: I0227 19:35:20.478802 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerDied","Data":"ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78"} Feb 27 19:35:20 crc kubenswrapper[4981]: I0227 19:35:20.478845 4981 scope.go:117] "RemoveContainer" containerID="b01185e35f631fc4c627efbb37de556efc5840a81f915a452c6f124e1d4cf905" Feb 27 19:35:20 crc kubenswrapper[4981]: I0227 19:35:20.480084 4981 scope.go:117] "RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:35:20 crc kubenswrapper[4981]: E0227 19:35:20.480458 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" 
podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:35:32 crc kubenswrapper[4981]: I0227 19:35:32.628994 4981 scope.go:117] "RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:35:32 crc kubenswrapper[4981]: E0227 19:35:32.630448 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:35:46 crc kubenswrapper[4981]: I0227 19:35:46.629013 4981 scope.go:117] "RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:35:46 crc kubenswrapper[4981]: E0227 19:35:46.629704 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:35:57 crc kubenswrapper[4981]: I0227 19:35:57.628619 4981 scope.go:117] "RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:35:57 crc kubenswrapper[4981]: E0227 19:35:57.629385 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:36:00 crc kubenswrapper[4981]: I0227 19:36:00.146013 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537016-5jcs8"] Feb 27 19:36:00 crc kubenswrapper[4981]: E0227 19:36:00.146699 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49394faf-2559-4c97-9601-df2b3c500a1e" containerName="oc" Feb 27 19:36:00 crc kubenswrapper[4981]: I0227 19:36:00.146713 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="49394faf-2559-4c97-9601-df2b3c500a1e" containerName="oc" Feb 27 19:36:00 crc kubenswrapper[4981]: E0227 19:36:00.146741 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65e28264-0563-4ebc-9d23-5917d83b5a26" containerName="registry-server" Feb 27 19:36:00 crc kubenswrapper[4981]: I0227 19:36:00.146747 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="65e28264-0563-4ebc-9d23-5917d83b5a26" containerName="registry-server" Feb 27 19:36:00 crc kubenswrapper[4981]: E0227 19:36:00.146757 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65e28264-0563-4ebc-9d23-5917d83b5a26" containerName="extract-utilities" Feb 27 19:36:00 crc kubenswrapper[4981]: I0227 19:36:00.146764 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="65e28264-0563-4ebc-9d23-5917d83b5a26" containerName="extract-utilities" Feb 27 19:36:00 crc kubenswrapper[4981]: E0227 19:36:00.146779 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65e28264-0563-4ebc-9d23-5917d83b5a26" containerName="extract-content" Feb 27 19:36:00 crc kubenswrapper[4981]: I0227 19:36:00.146785 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="65e28264-0563-4ebc-9d23-5917d83b5a26" containerName="extract-content" Feb 27 19:36:00 crc kubenswrapper[4981]: I0227 19:36:00.146905 4981 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="65e28264-0563-4ebc-9d23-5917d83b5a26" containerName="registry-server" Feb 27 19:36:00 crc kubenswrapper[4981]: I0227 19:36:00.146929 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="49394faf-2559-4c97-9601-df2b3c500a1e" containerName="oc" Feb 27 19:36:00 crc kubenswrapper[4981]: I0227 19:36:00.147448 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537016-5jcs8" Feb 27 19:36:00 crc kubenswrapper[4981]: I0227 19:36:00.150366 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:36:00 crc kubenswrapper[4981]: I0227 19:36:00.152723 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:36:00 crc kubenswrapper[4981]: I0227 19:36:00.152754 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf" Feb 27 19:36:00 crc kubenswrapper[4981]: I0227 19:36:00.163482 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537016-5jcs8"] Feb 27 19:36:00 crc kubenswrapper[4981]: I0227 19:36:00.293838 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddbz7\" (UniqueName: \"kubernetes.io/projected/2e5cea75-9d0c-4908-9e15-e19cd3a1b925-kube-api-access-ddbz7\") pod \"auto-csr-approver-29537016-5jcs8\" (UID: \"2e5cea75-9d0c-4908-9e15-e19cd3a1b925\") " pod="openshift-infra/auto-csr-approver-29537016-5jcs8" Feb 27 19:36:00 crc kubenswrapper[4981]: I0227 19:36:00.394637 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddbz7\" (UniqueName: \"kubernetes.io/projected/2e5cea75-9d0c-4908-9e15-e19cd3a1b925-kube-api-access-ddbz7\") pod \"auto-csr-approver-29537016-5jcs8\" (UID: \"2e5cea75-9d0c-4908-9e15-e19cd3a1b925\") " 
pod="openshift-infra/auto-csr-approver-29537016-5jcs8" Feb 27 19:36:00 crc kubenswrapper[4981]: I0227 19:36:00.417843 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddbz7\" (UniqueName: \"kubernetes.io/projected/2e5cea75-9d0c-4908-9e15-e19cd3a1b925-kube-api-access-ddbz7\") pod \"auto-csr-approver-29537016-5jcs8\" (UID: \"2e5cea75-9d0c-4908-9e15-e19cd3a1b925\") " pod="openshift-infra/auto-csr-approver-29537016-5jcs8" Feb 27 19:36:00 crc kubenswrapper[4981]: I0227 19:36:00.463912 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537016-5jcs8" Feb 27 19:36:00 crc kubenswrapper[4981]: I0227 19:36:00.882907 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537016-5jcs8"] Feb 27 19:36:01 crc kubenswrapper[4981]: I0227 19:36:01.743467 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537016-5jcs8" event={"ID":"2e5cea75-9d0c-4908-9e15-e19cd3a1b925","Type":"ContainerStarted","Data":"8b985871cc86ef2b17fa86016cb274b2ff500bdda3069bfa101b89db849430d5"} Feb 27 19:36:01 crc kubenswrapper[4981]: E0227 19:36:01.847398 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:36:01 crc kubenswrapper[4981]: E0227 19:36:01.847527 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:36:01 crc kubenswrapper[4981]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not 
.status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:36:01 crc kubenswrapper[4981]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ddbz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537016-5jcs8_openshift-infra(2e5cea75-9d0c-4908-9e15-e19cd3a1b925): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:36:01 crc kubenswrapper[4981]: > logger="UnhandledError" Feb 27 19:36:01 crc kubenswrapper[4981]: E0227 19:36:01.849497 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537016-5jcs8" podUID="2e5cea75-9d0c-4908-9e15-e19cd3a1b925" Feb 27 19:36:02 crc kubenswrapper[4981]: E0227 19:36:02.752155 4981 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537016-5jcs8" podUID="2e5cea75-9d0c-4908-9e15-e19cd3a1b925" Feb 27 19:36:12 crc kubenswrapper[4981]: I0227 19:36:12.628636 4981 scope.go:117] "RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:36:12 crc kubenswrapper[4981]: E0227 19:36:12.629338 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:36:13 crc kubenswrapper[4981]: I0227 19:36:13.376653 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b7dwm"] Feb 27 19:36:13 crc kubenswrapper[4981]: I0227 19:36:13.378425 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b7dwm" Feb 27 19:36:13 crc kubenswrapper[4981]: I0227 19:36:13.392536 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b7dwm"] Feb 27 19:36:13 crc kubenswrapper[4981]: I0227 19:36:13.492266 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skhgc\" (UniqueName: \"kubernetes.io/projected/0c715187-1c04-46a5-9cd9-a68c82ef477d-kube-api-access-skhgc\") pod \"certified-operators-b7dwm\" (UID: \"0c715187-1c04-46a5-9cd9-a68c82ef477d\") " pod="openshift-marketplace/certified-operators-b7dwm" Feb 27 19:36:13 crc kubenswrapper[4981]: I0227 19:36:13.492577 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c715187-1c04-46a5-9cd9-a68c82ef477d-utilities\") pod \"certified-operators-b7dwm\" (UID: \"0c715187-1c04-46a5-9cd9-a68c82ef477d\") " pod="openshift-marketplace/certified-operators-b7dwm" Feb 27 19:36:13 crc kubenswrapper[4981]: I0227 19:36:13.492609 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c715187-1c04-46a5-9cd9-a68c82ef477d-catalog-content\") pod \"certified-operators-b7dwm\" (UID: \"0c715187-1c04-46a5-9cd9-a68c82ef477d\") " pod="openshift-marketplace/certified-operators-b7dwm" Feb 27 19:36:13 crc kubenswrapper[4981]: I0227 19:36:13.593737 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c715187-1c04-46a5-9cd9-a68c82ef477d-catalog-content\") pod \"certified-operators-b7dwm\" (UID: \"0c715187-1c04-46a5-9cd9-a68c82ef477d\") " pod="openshift-marketplace/certified-operators-b7dwm" Feb 27 19:36:13 crc kubenswrapper[4981]: I0227 19:36:13.593914 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-skhgc\" (UniqueName: \"kubernetes.io/projected/0c715187-1c04-46a5-9cd9-a68c82ef477d-kube-api-access-skhgc\") pod \"certified-operators-b7dwm\" (UID: \"0c715187-1c04-46a5-9cd9-a68c82ef477d\") " pod="openshift-marketplace/certified-operators-b7dwm" Feb 27 19:36:13 crc kubenswrapper[4981]: I0227 19:36:13.593936 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c715187-1c04-46a5-9cd9-a68c82ef477d-utilities\") pod \"certified-operators-b7dwm\" (UID: \"0c715187-1c04-46a5-9cd9-a68c82ef477d\") " pod="openshift-marketplace/certified-operators-b7dwm" Feb 27 19:36:13 crc kubenswrapper[4981]: I0227 19:36:13.594246 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c715187-1c04-46a5-9cd9-a68c82ef477d-catalog-content\") pod \"certified-operators-b7dwm\" (UID: \"0c715187-1c04-46a5-9cd9-a68c82ef477d\") " pod="openshift-marketplace/certified-operators-b7dwm" Feb 27 19:36:13 crc kubenswrapper[4981]: I0227 19:36:13.594324 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c715187-1c04-46a5-9cd9-a68c82ef477d-utilities\") pod \"certified-operators-b7dwm\" (UID: \"0c715187-1c04-46a5-9cd9-a68c82ef477d\") " pod="openshift-marketplace/certified-operators-b7dwm" Feb 27 19:36:13 crc kubenswrapper[4981]: I0227 19:36:13.615448 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skhgc\" (UniqueName: \"kubernetes.io/projected/0c715187-1c04-46a5-9cd9-a68c82ef477d-kube-api-access-skhgc\") pod \"certified-operators-b7dwm\" (UID: \"0c715187-1c04-46a5-9cd9-a68c82ef477d\") " pod="openshift-marketplace/certified-operators-b7dwm" Feb 27 19:36:13 crc kubenswrapper[4981]: I0227 19:36:13.698616 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b7dwm" Feb 27 19:36:14 crc kubenswrapper[4981]: I0227 19:36:14.048118 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b7dwm"] Feb 27 19:36:14 crc kubenswrapper[4981]: I0227 19:36:14.830169 4981 generic.go:334] "Generic (PLEG): container finished" podID="0c715187-1c04-46a5-9cd9-a68c82ef477d" containerID="2d8e67414e0363605539f0e35439d03fba7a5232af60344c39a2db8b5b9c2279" exitCode=0 Feb 27 19:36:14 crc kubenswrapper[4981]: I0227 19:36:14.830213 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7dwm" event={"ID":"0c715187-1c04-46a5-9cd9-a68c82ef477d","Type":"ContainerDied","Data":"2d8e67414e0363605539f0e35439d03fba7a5232af60344c39a2db8b5b9c2279"} Feb 27 19:36:14 crc kubenswrapper[4981]: I0227 19:36:14.830267 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7dwm" event={"ID":"0c715187-1c04-46a5-9cd9-a68c82ef477d","Type":"ContainerStarted","Data":"eee6a1c2fb538ffd5baa070432e505efed90b05ae0f568e7b0fcca63c3a2efc9"} Feb 27 19:36:15 crc kubenswrapper[4981]: I0227 19:36:15.838219 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7dwm" event={"ID":"0c715187-1c04-46a5-9cd9-a68c82ef477d","Type":"ContainerStarted","Data":"6300b531b23f70ed9fedd8de81c5cb865268ba2b7b8460201cb43719cec7360a"} Feb 27 19:36:16 crc kubenswrapper[4981]: I0227 19:36:16.845839 4981 generic.go:334] "Generic (PLEG): container finished" podID="0c715187-1c04-46a5-9cd9-a68c82ef477d" containerID="6300b531b23f70ed9fedd8de81c5cb865268ba2b7b8460201cb43719cec7360a" exitCode=0 Feb 27 19:36:16 crc kubenswrapper[4981]: I0227 19:36:16.845898 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7dwm" 
event={"ID":"0c715187-1c04-46a5-9cd9-a68c82ef477d","Type":"ContainerDied","Data":"6300b531b23f70ed9fedd8de81c5cb865268ba2b7b8460201cb43719cec7360a"} Feb 27 19:36:17 crc kubenswrapper[4981]: I0227 19:36:17.852867 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7dwm" event={"ID":"0c715187-1c04-46a5-9cd9-a68c82ef477d","Type":"ContainerStarted","Data":"5f814a219b262fa72f43839cbea9da6ea0b37f6c8d07549ccd9ffa331252249a"} Feb 27 19:36:17 crc kubenswrapper[4981]: I0227 19:36:17.854123 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537016-5jcs8" event={"ID":"2e5cea75-9d0c-4908-9e15-e19cd3a1b925","Type":"ContainerStarted","Data":"d41c141f991aab917bae90260131f1a15cfe5240d21137814ca2c7d35650c59e"} Feb 27 19:36:17 crc kubenswrapper[4981]: I0227 19:36:17.877700 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b7dwm" podStartSLOduration=2.395001622 podStartE2EDuration="4.877681958s" podCreationTimestamp="2026-02-27 19:36:13 +0000 UTC" firstStartedPulling="2026-02-27 19:36:14.831553556 +0000 UTC m=+3074.310334716" lastFinishedPulling="2026-02-27 19:36:17.314233892 +0000 UTC m=+3076.793015052" observedRunningTime="2026-02-27 19:36:17.872976792 +0000 UTC m=+3077.351757952" watchObservedRunningTime="2026-02-27 19:36:17.877681958 +0000 UTC m=+3077.356463118" Feb 27 19:36:17 crc kubenswrapper[4981]: I0227 19:36:17.889562 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537016-5jcs8" podStartSLOduration=1.222854959 podStartE2EDuration="17.889544518s" podCreationTimestamp="2026-02-27 19:36:00 +0000 UTC" firstStartedPulling="2026-02-27 19:36:00.888480035 +0000 UTC m=+3060.367261195" lastFinishedPulling="2026-02-27 19:36:17.555169594 +0000 UTC m=+3077.033950754" observedRunningTime="2026-02-27 19:36:17.888810275 +0000 UTC m=+3077.367591445" 
watchObservedRunningTime="2026-02-27 19:36:17.889544518 +0000 UTC m=+3077.368325678" Feb 27 19:36:18 crc kubenswrapper[4981]: I0227 19:36:18.862546 4981 generic.go:334] "Generic (PLEG): container finished" podID="2e5cea75-9d0c-4908-9e15-e19cd3a1b925" containerID="d41c141f991aab917bae90260131f1a15cfe5240d21137814ca2c7d35650c59e" exitCode=0 Feb 27 19:36:18 crc kubenswrapper[4981]: I0227 19:36:18.862630 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537016-5jcs8" event={"ID":"2e5cea75-9d0c-4908-9e15-e19cd3a1b925","Type":"ContainerDied","Data":"d41c141f991aab917bae90260131f1a15cfe5240d21137814ca2c7d35650c59e"} Feb 27 19:36:20 crc kubenswrapper[4981]: I0227 19:36:20.124007 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537016-5jcs8" Feb 27 19:36:20 crc kubenswrapper[4981]: I0227 19:36:20.189994 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddbz7\" (UniqueName: \"kubernetes.io/projected/2e5cea75-9d0c-4908-9e15-e19cd3a1b925-kube-api-access-ddbz7\") pod \"2e5cea75-9d0c-4908-9e15-e19cd3a1b925\" (UID: \"2e5cea75-9d0c-4908-9e15-e19cd3a1b925\") " Feb 27 19:36:20 crc kubenswrapper[4981]: I0227 19:36:20.195236 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e5cea75-9d0c-4908-9e15-e19cd3a1b925-kube-api-access-ddbz7" (OuterVolumeSpecName: "kube-api-access-ddbz7") pod "2e5cea75-9d0c-4908-9e15-e19cd3a1b925" (UID: "2e5cea75-9d0c-4908-9e15-e19cd3a1b925"). InnerVolumeSpecName "kube-api-access-ddbz7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:36:20 crc kubenswrapper[4981]: I0227 19:36:20.291485 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddbz7\" (UniqueName: \"kubernetes.io/projected/2e5cea75-9d0c-4908-9e15-e19cd3a1b925-kube-api-access-ddbz7\") on node \"crc\" DevicePath \"\"" Feb 27 19:36:21 crc kubenswrapper[4981]: I0227 19:36:21.082169 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537016-5jcs8" event={"ID":"2e5cea75-9d0c-4908-9e15-e19cd3a1b925","Type":"ContainerDied","Data":"8b985871cc86ef2b17fa86016cb274b2ff500bdda3069bfa101b89db849430d5"} Feb 27 19:36:21 crc kubenswrapper[4981]: I0227 19:36:21.082214 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b985871cc86ef2b17fa86016cb274b2ff500bdda3069bfa101b89db849430d5" Feb 27 19:36:21 crc kubenswrapper[4981]: I0227 19:36:21.082259 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537016-5jcs8" Feb 27 19:36:21 crc kubenswrapper[4981]: I0227 19:36:21.191530 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537010-p6w2v"] Feb 27 19:36:21 crc kubenswrapper[4981]: I0227 19:36:21.196228 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537010-p6w2v"] Feb 27 19:36:21 crc kubenswrapper[4981]: I0227 19:36:21.636009 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11d412f6-bf30-4199-be58-e966a4b3da09" path="/var/lib/kubelet/pods/11d412f6-bf30-4199-be58-e966a4b3da09/volumes" Feb 27 19:36:23 crc kubenswrapper[4981]: I0227 19:36:23.359833 4981 scope.go:117] "RemoveContainer" containerID="b63471a0bba7b8bc797a1282f73adb9991ae538beeeb89d2e40887c091459d43" Feb 27 19:36:23 crc kubenswrapper[4981]: I0227 19:36:23.699632 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-b7dwm" Feb 27 19:36:23 crc kubenswrapper[4981]: I0227 19:36:23.699694 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b7dwm" Feb 27 19:36:23 crc kubenswrapper[4981]: I0227 19:36:23.736968 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b7dwm" Feb 27 19:36:24 crc kubenswrapper[4981]: I0227 19:36:24.140762 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b7dwm" Feb 27 19:36:24 crc kubenswrapper[4981]: I0227 19:36:24.184689 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b7dwm"] Feb 27 19:36:25 crc kubenswrapper[4981]: I0227 19:36:25.629887 4981 scope.go:117] "RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:36:25 crc kubenswrapper[4981]: E0227 19:36:25.630229 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:36:26 crc kubenswrapper[4981]: I0227 19:36:26.121389 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b7dwm" podUID="0c715187-1c04-46a5-9cd9-a68c82ef477d" containerName="registry-server" containerID="cri-o://5f814a219b262fa72f43839cbea9da6ea0b37f6c8d07549ccd9ffa331252249a" gracePeriod=2 Feb 27 19:36:26 crc kubenswrapper[4981]: I0227 19:36:26.548345 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b7dwm" Feb 27 19:36:26 crc kubenswrapper[4981]: I0227 19:36:26.747403 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c715187-1c04-46a5-9cd9-a68c82ef477d-catalog-content\") pod \"0c715187-1c04-46a5-9cd9-a68c82ef477d\" (UID: \"0c715187-1c04-46a5-9cd9-a68c82ef477d\") " Feb 27 19:36:26 crc kubenswrapper[4981]: I0227 19:36:26.747493 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skhgc\" (UniqueName: \"kubernetes.io/projected/0c715187-1c04-46a5-9cd9-a68c82ef477d-kube-api-access-skhgc\") pod \"0c715187-1c04-46a5-9cd9-a68c82ef477d\" (UID: \"0c715187-1c04-46a5-9cd9-a68c82ef477d\") " Feb 27 19:36:26 crc kubenswrapper[4981]: I0227 19:36:26.747519 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c715187-1c04-46a5-9cd9-a68c82ef477d-utilities\") pod \"0c715187-1c04-46a5-9cd9-a68c82ef477d\" (UID: \"0c715187-1c04-46a5-9cd9-a68c82ef477d\") " Feb 27 19:36:26 crc kubenswrapper[4981]: I0227 19:36:26.748790 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c715187-1c04-46a5-9cd9-a68c82ef477d-utilities" (OuterVolumeSpecName: "utilities") pod "0c715187-1c04-46a5-9cd9-a68c82ef477d" (UID: "0c715187-1c04-46a5-9cd9-a68c82ef477d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:36:26 crc kubenswrapper[4981]: I0227 19:36:26.754229 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c715187-1c04-46a5-9cd9-a68c82ef477d-kube-api-access-skhgc" (OuterVolumeSpecName: "kube-api-access-skhgc") pod "0c715187-1c04-46a5-9cd9-a68c82ef477d" (UID: "0c715187-1c04-46a5-9cd9-a68c82ef477d"). InnerVolumeSpecName "kube-api-access-skhgc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:36:26 crc kubenswrapper[4981]: I0227 19:36:26.849695 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skhgc\" (UniqueName: \"kubernetes.io/projected/0c715187-1c04-46a5-9cd9-a68c82ef477d-kube-api-access-skhgc\") on node \"crc\" DevicePath \"\"" Feb 27 19:36:26 crc kubenswrapper[4981]: I0227 19:36:26.849748 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c715187-1c04-46a5-9cd9-a68c82ef477d-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:36:27 crc kubenswrapper[4981]: I0227 19:36:27.130398 4981 generic.go:334] "Generic (PLEG): container finished" podID="0c715187-1c04-46a5-9cd9-a68c82ef477d" containerID="5f814a219b262fa72f43839cbea9da6ea0b37f6c8d07549ccd9ffa331252249a" exitCode=0 Feb 27 19:36:27 crc kubenswrapper[4981]: I0227 19:36:27.130437 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b7dwm" Feb 27 19:36:27 crc kubenswrapper[4981]: I0227 19:36:27.130462 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7dwm" event={"ID":"0c715187-1c04-46a5-9cd9-a68c82ef477d","Type":"ContainerDied","Data":"5f814a219b262fa72f43839cbea9da6ea0b37f6c8d07549ccd9ffa331252249a"} Feb 27 19:36:27 crc kubenswrapper[4981]: I0227 19:36:27.130496 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7dwm" event={"ID":"0c715187-1c04-46a5-9cd9-a68c82ef477d","Type":"ContainerDied","Data":"eee6a1c2fb538ffd5baa070432e505efed90b05ae0f568e7b0fcca63c3a2efc9"} Feb 27 19:36:27 crc kubenswrapper[4981]: I0227 19:36:27.130514 4981 scope.go:117] "RemoveContainer" containerID="5f814a219b262fa72f43839cbea9da6ea0b37f6c8d07549ccd9ffa331252249a" Feb 27 19:36:27 crc kubenswrapper[4981]: I0227 19:36:27.149229 4981 scope.go:117] "RemoveContainer" 
containerID="6300b531b23f70ed9fedd8de81c5cb865268ba2b7b8460201cb43719cec7360a" Feb 27 19:36:27 crc kubenswrapper[4981]: I0227 19:36:27.174020 4981 scope.go:117] "RemoveContainer" containerID="2d8e67414e0363605539f0e35439d03fba7a5232af60344c39a2db8b5b9c2279" Feb 27 19:36:27 crc kubenswrapper[4981]: I0227 19:36:27.196522 4981 scope.go:117] "RemoveContainer" containerID="5f814a219b262fa72f43839cbea9da6ea0b37f6c8d07549ccd9ffa331252249a" Feb 27 19:36:27 crc kubenswrapper[4981]: E0227 19:36:27.196972 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f814a219b262fa72f43839cbea9da6ea0b37f6c8d07549ccd9ffa331252249a\": container with ID starting with 5f814a219b262fa72f43839cbea9da6ea0b37f6c8d07549ccd9ffa331252249a not found: ID does not exist" containerID="5f814a219b262fa72f43839cbea9da6ea0b37f6c8d07549ccd9ffa331252249a" Feb 27 19:36:27 crc kubenswrapper[4981]: I0227 19:36:27.197008 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f814a219b262fa72f43839cbea9da6ea0b37f6c8d07549ccd9ffa331252249a"} err="failed to get container status \"5f814a219b262fa72f43839cbea9da6ea0b37f6c8d07549ccd9ffa331252249a\": rpc error: code = NotFound desc = could not find container \"5f814a219b262fa72f43839cbea9da6ea0b37f6c8d07549ccd9ffa331252249a\": container with ID starting with 5f814a219b262fa72f43839cbea9da6ea0b37f6c8d07549ccd9ffa331252249a not found: ID does not exist" Feb 27 19:36:27 crc kubenswrapper[4981]: I0227 19:36:27.197029 4981 scope.go:117] "RemoveContainer" containerID="6300b531b23f70ed9fedd8de81c5cb865268ba2b7b8460201cb43719cec7360a" Feb 27 19:36:27 crc kubenswrapper[4981]: E0227 19:36:27.197263 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6300b531b23f70ed9fedd8de81c5cb865268ba2b7b8460201cb43719cec7360a\": container with ID starting with 
6300b531b23f70ed9fedd8de81c5cb865268ba2b7b8460201cb43719cec7360a not found: ID does not exist" containerID="6300b531b23f70ed9fedd8de81c5cb865268ba2b7b8460201cb43719cec7360a" Feb 27 19:36:27 crc kubenswrapper[4981]: I0227 19:36:27.197291 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6300b531b23f70ed9fedd8de81c5cb865268ba2b7b8460201cb43719cec7360a"} err="failed to get container status \"6300b531b23f70ed9fedd8de81c5cb865268ba2b7b8460201cb43719cec7360a\": rpc error: code = NotFound desc = could not find container \"6300b531b23f70ed9fedd8de81c5cb865268ba2b7b8460201cb43719cec7360a\": container with ID starting with 6300b531b23f70ed9fedd8de81c5cb865268ba2b7b8460201cb43719cec7360a not found: ID does not exist" Feb 27 19:36:27 crc kubenswrapper[4981]: I0227 19:36:27.197309 4981 scope.go:117] "RemoveContainer" containerID="2d8e67414e0363605539f0e35439d03fba7a5232af60344c39a2db8b5b9c2279" Feb 27 19:36:27 crc kubenswrapper[4981]: E0227 19:36:27.197624 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d8e67414e0363605539f0e35439d03fba7a5232af60344c39a2db8b5b9c2279\": container with ID starting with 2d8e67414e0363605539f0e35439d03fba7a5232af60344c39a2db8b5b9c2279 not found: ID does not exist" containerID="2d8e67414e0363605539f0e35439d03fba7a5232af60344c39a2db8b5b9c2279" Feb 27 19:36:27 crc kubenswrapper[4981]: I0227 19:36:27.197642 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d8e67414e0363605539f0e35439d03fba7a5232af60344c39a2db8b5b9c2279"} err="failed to get container status \"2d8e67414e0363605539f0e35439d03fba7a5232af60344c39a2db8b5b9c2279\": rpc error: code = NotFound desc = could not find container \"2d8e67414e0363605539f0e35439d03fba7a5232af60344c39a2db8b5b9c2279\": container with ID starting with 2d8e67414e0363605539f0e35439d03fba7a5232af60344c39a2db8b5b9c2279 not found: ID does not 
exist" Feb 27 19:36:27 crc kubenswrapper[4981]: I0227 19:36:27.438151 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c715187-1c04-46a5-9cd9-a68c82ef477d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c715187-1c04-46a5-9cd9-a68c82ef477d" (UID: "0c715187-1c04-46a5-9cd9-a68c82ef477d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:36:27 crc kubenswrapper[4981]: I0227 19:36:27.457427 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c715187-1c04-46a5-9cd9-a68c82ef477d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:36:27 crc kubenswrapper[4981]: I0227 19:36:27.752523 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b7dwm"] Feb 27 19:36:27 crc kubenswrapper[4981]: I0227 19:36:27.761288 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b7dwm"] Feb 27 19:36:29 crc kubenswrapper[4981]: I0227 19:36:29.646390 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c715187-1c04-46a5-9cd9-a68c82ef477d" path="/var/lib/kubelet/pods/0c715187-1c04-46a5-9cd9-a68c82ef477d/volumes" Feb 27 19:36:39 crc kubenswrapper[4981]: I0227 19:36:39.628026 4981 scope.go:117] "RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:36:39 crc kubenswrapper[4981]: E0227 19:36:39.628838 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 
19:36:51 crc kubenswrapper[4981]: I0227 19:36:51.632339 4981 scope.go:117] "RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:36:51 crc kubenswrapper[4981]: E0227 19:36:51.633047 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:37:04 crc kubenswrapper[4981]: I0227 19:37:04.628217 4981 scope.go:117] "RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:37:04 crc kubenswrapper[4981]: E0227 19:37:04.629077 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:37:15 crc kubenswrapper[4981]: I0227 19:37:15.628785 4981 scope.go:117] "RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:37:15 crc kubenswrapper[4981]: E0227 19:37:15.629485 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" 
podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:37:27 crc kubenswrapper[4981]: I0227 19:37:27.628615 4981 scope.go:117] "RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:37:27 crc kubenswrapper[4981]: E0227 19:37:27.629339 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:37:42 crc kubenswrapper[4981]: I0227 19:37:42.628808 4981 scope.go:117] "RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:37:42 crc kubenswrapper[4981]: E0227 19:37:42.629363 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:37:57 crc kubenswrapper[4981]: I0227 19:37:57.628114 4981 scope.go:117] "RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:37:57 crc kubenswrapper[4981]: E0227 19:37:57.628699 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:38:00 crc kubenswrapper[4981]: I0227 19:38:00.154299 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537018-x6blf"] Feb 27 19:38:00 crc kubenswrapper[4981]: E0227 19:38:00.155251 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c715187-1c04-46a5-9cd9-a68c82ef477d" containerName="extract-utilities" Feb 27 19:38:00 crc kubenswrapper[4981]: I0227 19:38:00.155278 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c715187-1c04-46a5-9cd9-a68c82ef477d" containerName="extract-utilities" Feb 27 19:38:00 crc kubenswrapper[4981]: E0227 19:38:00.155314 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e5cea75-9d0c-4908-9e15-e19cd3a1b925" containerName="oc" Feb 27 19:38:00 crc kubenswrapper[4981]: I0227 19:38:00.155327 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e5cea75-9d0c-4908-9e15-e19cd3a1b925" containerName="oc" Feb 27 19:38:00 crc kubenswrapper[4981]: E0227 19:38:00.155343 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c715187-1c04-46a5-9cd9-a68c82ef477d" containerName="registry-server" Feb 27 19:38:00 crc kubenswrapper[4981]: I0227 19:38:00.155357 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c715187-1c04-46a5-9cd9-a68c82ef477d" containerName="registry-server" Feb 27 19:38:00 crc kubenswrapper[4981]: E0227 19:38:00.155381 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c715187-1c04-46a5-9cd9-a68c82ef477d" containerName="extract-content" Feb 27 19:38:00 crc kubenswrapper[4981]: I0227 19:38:00.155393 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c715187-1c04-46a5-9cd9-a68c82ef477d" containerName="extract-content" Feb 27 19:38:00 crc kubenswrapper[4981]: I0227 19:38:00.155696 4981 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0c715187-1c04-46a5-9cd9-a68c82ef477d" containerName="registry-server" Feb 27 19:38:00 crc kubenswrapper[4981]: I0227 19:38:00.155741 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e5cea75-9d0c-4908-9e15-e19cd3a1b925" containerName="oc" Feb 27 19:38:00 crc kubenswrapper[4981]: I0227 19:38:00.156450 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537018-x6blf" Feb 27 19:38:00 crc kubenswrapper[4981]: I0227 19:38:00.159414 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:38:00 crc kubenswrapper[4981]: I0227 19:38:00.159473 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:38:00 crc kubenswrapper[4981]: I0227 19:38:00.159774 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf" Feb 27 19:38:00 crc kubenswrapper[4981]: I0227 19:38:00.165517 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537018-x6blf"] Feb 27 19:38:00 crc kubenswrapper[4981]: I0227 19:38:00.245303 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzfh2\" (UniqueName: \"kubernetes.io/projected/52c2b304-bd7b-45a7-87da-025e79b1733e-kube-api-access-kzfh2\") pod \"auto-csr-approver-29537018-x6blf\" (UID: \"52c2b304-bd7b-45a7-87da-025e79b1733e\") " pod="openshift-infra/auto-csr-approver-29537018-x6blf" Feb 27 19:38:00 crc kubenswrapper[4981]: I0227 19:38:00.346307 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzfh2\" (UniqueName: \"kubernetes.io/projected/52c2b304-bd7b-45a7-87da-025e79b1733e-kube-api-access-kzfh2\") pod \"auto-csr-approver-29537018-x6blf\" (UID: \"52c2b304-bd7b-45a7-87da-025e79b1733e\") " 
pod="openshift-infra/auto-csr-approver-29537018-x6blf" Feb 27 19:38:00 crc kubenswrapper[4981]: I0227 19:38:00.365355 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzfh2\" (UniqueName: \"kubernetes.io/projected/52c2b304-bd7b-45a7-87da-025e79b1733e-kube-api-access-kzfh2\") pod \"auto-csr-approver-29537018-x6blf\" (UID: \"52c2b304-bd7b-45a7-87da-025e79b1733e\") " pod="openshift-infra/auto-csr-approver-29537018-x6blf" Feb 27 19:38:00 crc kubenswrapper[4981]: I0227 19:38:00.473883 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537018-x6blf" Feb 27 19:38:00 crc kubenswrapper[4981]: I0227 19:38:00.858733 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537018-x6blf"] Feb 27 19:38:00 crc kubenswrapper[4981]: I0227 19:38:00.868485 4981 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 19:38:01 crc kubenswrapper[4981]: I0227 19:38:01.765512 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537018-x6blf" event={"ID":"52c2b304-bd7b-45a7-87da-025e79b1733e","Type":"ContainerStarted","Data":"9020cc402cd3323326f6b1989dc6381fe902aa3c0510fa6aa1e40d52b66d0938"} Feb 27 19:38:02 crc kubenswrapper[4981]: E0227 19:38:02.004609 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:38:02 crc kubenswrapper[4981]: E0227 19:38:02.004743 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:38:02 crc kubenswrapper[4981]: container 
&Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:38:02 crc kubenswrapper[4981]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kzfh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537018-x6blf_openshift-infra(52c2b304-bd7b-45a7-87da-025e79b1733e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:38:02 crc kubenswrapper[4981]: > logger="UnhandledError" Feb 27 19:38:02 crc kubenswrapper[4981]: E0227 19:38:02.005975 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537018-x6blf" 
podUID="52c2b304-bd7b-45a7-87da-025e79b1733e" Feb 27 19:38:02 crc kubenswrapper[4981]: E0227 19:38:02.780761 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537018-x6blf" podUID="52c2b304-bd7b-45a7-87da-025e79b1733e" Feb 27 19:38:09 crc kubenswrapper[4981]: I0227 19:38:09.628907 4981 scope.go:117] "RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:38:09 crc kubenswrapper[4981]: E0227 19:38:09.629722 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:38:16 crc kubenswrapper[4981]: I0227 19:38:16.881836 4981 generic.go:334] "Generic (PLEG): container finished" podID="52c2b304-bd7b-45a7-87da-025e79b1733e" containerID="9e9fdeb7164db6c637325ff531c99cd49a7b7b0464d40bc20e12352c42ce9141" exitCode=0 Feb 27 19:38:16 crc kubenswrapper[4981]: I0227 19:38:16.881945 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537018-x6blf" event={"ID":"52c2b304-bd7b-45a7-87da-025e79b1733e","Type":"ContainerDied","Data":"9e9fdeb7164db6c637325ff531c99cd49a7b7b0464d40bc20e12352c42ce9141"} Feb 27 19:38:18 crc kubenswrapper[4981]: I0227 19:38:18.145521 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537018-x6blf" Feb 27 19:38:18 crc kubenswrapper[4981]: I0227 19:38:18.299911 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzfh2\" (UniqueName: \"kubernetes.io/projected/52c2b304-bd7b-45a7-87da-025e79b1733e-kube-api-access-kzfh2\") pod \"52c2b304-bd7b-45a7-87da-025e79b1733e\" (UID: \"52c2b304-bd7b-45a7-87da-025e79b1733e\") " Feb 27 19:38:18 crc kubenswrapper[4981]: I0227 19:38:18.305285 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52c2b304-bd7b-45a7-87da-025e79b1733e-kube-api-access-kzfh2" (OuterVolumeSpecName: "kube-api-access-kzfh2") pod "52c2b304-bd7b-45a7-87da-025e79b1733e" (UID: "52c2b304-bd7b-45a7-87da-025e79b1733e"). InnerVolumeSpecName "kube-api-access-kzfh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:38:18 crc kubenswrapper[4981]: I0227 19:38:18.401495 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzfh2\" (UniqueName: \"kubernetes.io/projected/52c2b304-bd7b-45a7-87da-025e79b1733e-kube-api-access-kzfh2\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:18 crc kubenswrapper[4981]: I0227 19:38:18.899798 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537018-x6blf" event={"ID":"52c2b304-bd7b-45a7-87da-025e79b1733e","Type":"ContainerDied","Data":"9020cc402cd3323326f6b1989dc6381fe902aa3c0510fa6aa1e40d52b66d0938"} Feb 27 19:38:18 crc kubenswrapper[4981]: I0227 19:38:18.899832 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9020cc402cd3323326f6b1989dc6381fe902aa3c0510fa6aa1e40d52b66d0938" Feb 27 19:38:18 crc kubenswrapper[4981]: I0227 19:38:18.899882 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537018-x6blf" Feb 27 19:38:19 crc kubenswrapper[4981]: I0227 19:38:19.236289 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537012-9r2bh"] Feb 27 19:38:19 crc kubenswrapper[4981]: I0227 19:38:19.241451 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537012-9r2bh"] Feb 27 19:38:19 crc kubenswrapper[4981]: I0227 19:38:19.637981 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16d1aa2c-10fa-4d14-a4b7-f6ba9bc3a833" path="/var/lib/kubelet/pods/16d1aa2c-10fa-4d14-a4b7-f6ba9bc3a833/volumes" Feb 27 19:38:20 crc kubenswrapper[4981]: I0227 19:38:20.630153 4981 scope.go:117] "RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:38:20 crc kubenswrapper[4981]: E0227 19:38:20.630518 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:38:23 crc kubenswrapper[4981]: I0227 19:38:23.452672 4981 scope.go:117] "RemoveContainer" containerID="13f255d9dbe7493ed50af7be68beed69e6168aa161f058528461485d341393c5" Feb 27 19:38:34 crc kubenswrapper[4981]: I0227 19:38:34.628725 4981 scope.go:117] "RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:38:34 crc kubenswrapper[4981]: E0227 19:38:34.629524 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:38:48 crc kubenswrapper[4981]: I0227 19:38:48.628532 4981 scope.go:117] "RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:38:48 crc kubenswrapper[4981]: E0227 19:38:48.629410 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:39:02 crc kubenswrapper[4981]: I0227 19:39:02.628284 4981 scope.go:117] "RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:39:02 crc kubenswrapper[4981]: E0227 19:39:02.628962 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:39:13 crc kubenswrapper[4981]: I0227 19:39:13.629284 4981 scope.go:117] "RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:39:13 crc kubenswrapper[4981]: E0227 19:39:13.630150 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:39:25 crc kubenswrapper[4981]: I0227 19:39:25.628825 4981 scope.go:117] "RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:39:25 crc kubenswrapper[4981]: E0227 19:39:25.629481 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:39:40 crc kubenswrapper[4981]: I0227 19:39:40.628092 4981 scope.go:117] "RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:39:40 crc kubenswrapper[4981]: E0227 19:39:40.628724 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:39:53 crc kubenswrapper[4981]: I0227 19:39:53.629231 4981 scope.go:117] "RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:39:53 crc kubenswrapper[4981]: E0227 19:39:53.630027 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:40:00 crc kubenswrapper[4981]: I0227 19:40:00.143280 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537020-747wv"] Feb 27 19:40:00 crc kubenswrapper[4981]: E0227 19:40:00.144293 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52c2b304-bd7b-45a7-87da-025e79b1733e" containerName="oc" Feb 27 19:40:00 crc kubenswrapper[4981]: I0227 19:40:00.144309 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="52c2b304-bd7b-45a7-87da-025e79b1733e" containerName="oc" Feb 27 19:40:00 crc kubenswrapper[4981]: I0227 19:40:00.144505 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="52c2b304-bd7b-45a7-87da-025e79b1733e" containerName="oc" Feb 27 19:40:00 crc kubenswrapper[4981]: I0227 19:40:00.144948 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537020-747wv" Feb 27 19:40:00 crc kubenswrapper[4981]: I0227 19:40:00.151132 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:40:00 crc kubenswrapper[4981]: I0227 19:40:00.151327 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf" Feb 27 19:40:00 crc kubenswrapper[4981]: I0227 19:40:00.160162 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:40:00 crc kubenswrapper[4981]: I0227 19:40:00.166383 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537020-747wv"] Feb 27 19:40:00 crc kubenswrapper[4981]: I0227 19:40:00.319323 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl658\" (UniqueName: \"kubernetes.io/projected/dad48811-a01e-4ebe-ae29-0a45059740cb-kube-api-access-jl658\") pod \"auto-csr-approver-29537020-747wv\" (UID: \"dad48811-a01e-4ebe-ae29-0a45059740cb\") " pod="openshift-infra/auto-csr-approver-29537020-747wv" Feb 27 19:40:00 crc kubenswrapper[4981]: I0227 19:40:00.420959 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl658\" (UniqueName: \"kubernetes.io/projected/dad48811-a01e-4ebe-ae29-0a45059740cb-kube-api-access-jl658\") pod \"auto-csr-approver-29537020-747wv\" (UID: \"dad48811-a01e-4ebe-ae29-0a45059740cb\") " pod="openshift-infra/auto-csr-approver-29537020-747wv" Feb 27 19:40:00 crc kubenswrapper[4981]: I0227 19:40:00.443897 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl658\" (UniqueName: \"kubernetes.io/projected/dad48811-a01e-4ebe-ae29-0a45059740cb-kube-api-access-jl658\") pod \"auto-csr-approver-29537020-747wv\" (UID: \"dad48811-a01e-4ebe-ae29-0a45059740cb\") " 
pod="openshift-infra/auto-csr-approver-29537020-747wv" Feb 27 19:40:00 crc kubenswrapper[4981]: I0227 19:40:00.463146 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537020-747wv" Feb 27 19:40:00 crc kubenswrapper[4981]: I0227 19:40:00.865318 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537020-747wv"] Feb 27 19:40:01 crc kubenswrapper[4981]: I0227 19:40:01.607757 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537020-747wv" event={"ID":"dad48811-a01e-4ebe-ae29-0a45059740cb","Type":"ContainerStarted","Data":"6464d3920b92c4357ea4f72c177456193a16859c01b08ff8a7ebec060f3ead07"} Feb 27 19:40:02 crc kubenswrapper[4981]: E0227 19:40:02.370737 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:40:02 crc kubenswrapper[4981]: E0227 19:40:02.370882 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:40:02 crc kubenswrapper[4981]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:40:02 crc kubenswrapper[4981]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jl658,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537020-747wv_openshift-infra(dad48811-a01e-4ebe-ae29-0a45059740cb): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:40:02 crc kubenswrapper[4981]: > logger="UnhandledError" Feb 27 19:40:02 crc kubenswrapper[4981]: E0227 19:40:02.372104 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537020-747wv" podUID="dad48811-a01e-4ebe-ae29-0a45059740cb" Feb 27 19:40:02 crc kubenswrapper[4981]: E0227 19:40:02.616548 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29537020-747wv" podUID="dad48811-a01e-4ebe-ae29-0a45059740cb" Feb 27 19:40:05 crc kubenswrapper[4981]: I0227 19:40:05.628424 4981 scope.go:117] "RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:40:05 crc kubenswrapper[4981]: E0227 19:40:05.628911 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:40:19 crc kubenswrapper[4981]: I0227 19:40:19.629238 4981 scope.go:117] "RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:40:19 crc kubenswrapper[4981]: E0227 19:40:19.629943 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:40:19 crc kubenswrapper[4981]: I0227 19:40:19.728388 4981 generic.go:334] "Generic (PLEG): container finished" podID="dad48811-a01e-4ebe-ae29-0a45059740cb" containerID="556f734cf083a04b4806680b2f6c5ec7622329a6d99945d18939de10ed3c151a" exitCode=0 Feb 27 19:40:19 crc kubenswrapper[4981]: I0227 19:40:19.728424 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537020-747wv" 
event={"ID":"dad48811-a01e-4ebe-ae29-0a45059740cb","Type":"ContainerDied","Data":"556f734cf083a04b4806680b2f6c5ec7622329a6d99945d18939de10ed3c151a"} Feb 27 19:40:20 crc kubenswrapper[4981]: I0227 19:40:20.972726 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537020-747wv" Feb 27 19:40:21 crc kubenswrapper[4981]: I0227 19:40:21.110793 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl658\" (UniqueName: \"kubernetes.io/projected/dad48811-a01e-4ebe-ae29-0a45059740cb-kube-api-access-jl658\") pod \"dad48811-a01e-4ebe-ae29-0a45059740cb\" (UID: \"dad48811-a01e-4ebe-ae29-0a45059740cb\") " Feb 27 19:40:21 crc kubenswrapper[4981]: I0227 19:40:21.115493 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dad48811-a01e-4ebe-ae29-0a45059740cb-kube-api-access-jl658" (OuterVolumeSpecName: "kube-api-access-jl658") pod "dad48811-a01e-4ebe-ae29-0a45059740cb" (UID: "dad48811-a01e-4ebe-ae29-0a45059740cb"). InnerVolumeSpecName "kube-api-access-jl658". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:40:21 crc kubenswrapper[4981]: I0227 19:40:21.212689 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl658\" (UniqueName: \"kubernetes.io/projected/dad48811-a01e-4ebe-ae29-0a45059740cb-kube-api-access-jl658\") on node \"crc\" DevicePath \"\"" Feb 27 19:40:21 crc kubenswrapper[4981]: I0227 19:40:21.743907 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537020-747wv" event={"ID":"dad48811-a01e-4ebe-ae29-0a45059740cb","Type":"ContainerDied","Data":"6464d3920b92c4357ea4f72c177456193a16859c01b08ff8a7ebec060f3ead07"} Feb 27 19:40:21 crc kubenswrapper[4981]: I0227 19:40:21.743954 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6464d3920b92c4357ea4f72c177456193a16859c01b08ff8a7ebec060f3ead07" Feb 27 19:40:21 crc kubenswrapper[4981]: I0227 19:40:21.743977 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537020-747wv" Feb 27 19:40:22 crc kubenswrapper[4981]: I0227 19:40:22.036347 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537014-wh9vm"] Feb 27 19:40:22 crc kubenswrapper[4981]: I0227 19:40:22.042545 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537014-wh9vm"] Feb 27 19:40:23 crc kubenswrapper[4981]: I0227 19:40:23.636805 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49394faf-2559-4c97-9601-df2b3c500a1e" path="/var/lib/kubelet/pods/49394faf-2559-4c97-9601-df2b3c500a1e/volumes" Feb 27 19:40:31 crc kubenswrapper[4981]: I0227 19:40:31.640246 4981 scope.go:117] "RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:40:31 crc kubenswrapper[4981]: I0227 19:40:31.822337 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerStarted","Data":"7e7587b85d64eef60150f43bd74b50c7e64a21cc95eaa522a2a9bc99615746d6"} Feb 27 19:41:23 crc kubenswrapper[4981]: I0227 19:41:23.574991 4981 scope.go:117] "RemoveContainer" containerID="f82a49db1780a2eaaf253e0c71e3c58df790606b7b497dd9fc1ed095afc09a59" Feb 27 19:41:57 crc kubenswrapper[4981]: I0227 19:41:57.210406 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tpczw/must-gather-hrzrj"] Feb 27 19:41:57 crc kubenswrapper[4981]: E0227 19:41:57.211136 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad48811-a01e-4ebe-ae29-0a45059740cb" containerName="oc" Feb 27 19:41:57 crc kubenswrapper[4981]: I0227 19:41:57.211150 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad48811-a01e-4ebe-ae29-0a45059740cb" containerName="oc" Feb 27 19:41:57 crc kubenswrapper[4981]: I0227 19:41:57.211314 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="dad48811-a01e-4ebe-ae29-0a45059740cb" containerName="oc" Feb 27 19:41:57 crc kubenswrapper[4981]: I0227 19:41:57.211970 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tpczw/must-gather-hrzrj" Feb 27 19:41:57 crc kubenswrapper[4981]: I0227 19:41:57.242684 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/47f8e1a3-65ad-49a9-8a05-b927be5a5373-must-gather-output\") pod \"must-gather-hrzrj\" (UID: \"47f8e1a3-65ad-49a9-8a05-b927be5a5373\") " pod="openshift-must-gather-tpczw/must-gather-hrzrj" Feb 27 19:41:57 crc kubenswrapper[4981]: I0227 19:41:57.243108 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpflm\" (UniqueName: \"kubernetes.io/projected/47f8e1a3-65ad-49a9-8a05-b927be5a5373-kube-api-access-xpflm\") pod \"must-gather-hrzrj\" (UID: \"47f8e1a3-65ad-49a9-8a05-b927be5a5373\") " pod="openshift-must-gather-tpczw/must-gather-hrzrj" Feb 27 19:41:57 crc kubenswrapper[4981]: I0227 19:41:57.248830 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tpczw"/"openshift-service-ca.crt" Feb 27 19:41:57 crc kubenswrapper[4981]: I0227 19:41:57.250932 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tpczw"/"kube-root-ca.crt" Feb 27 19:41:57 crc kubenswrapper[4981]: I0227 19:41:57.289081 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tpczw/must-gather-hrzrj"] Feb 27 19:41:57 crc kubenswrapper[4981]: I0227 19:41:57.344025 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpflm\" (UniqueName: \"kubernetes.io/projected/47f8e1a3-65ad-49a9-8a05-b927be5a5373-kube-api-access-xpflm\") pod \"must-gather-hrzrj\" (UID: \"47f8e1a3-65ad-49a9-8a05-b927be5a5373\") " pod="openshift-must-gather-tpczw/must-gather-hrzrj" Feb 27 19:41:57 crc kubenswrapper[4981]: I0227 19:41:57.344109 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/47f8e1a3-65ad-49a9-8a05-b927be5a5373-must-gather-output\") pod \"must-gather-hrzrj\" (UID: \"47f8e1a3-65ad-49a9-8a05-b927be5a5373\") " pod="openshift-must-gather-tpczw/must-gather-hrzrj" Feb 27 19:41:57 crc kubenswrapper[4981]: I0227 19:41:57.344503 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/47f8e1a3-65ad-49a9-8a05-b927be5a5373-must-gather-output\") pod \"must-gather-hrzrj\" (UID: \"47f8e1a3-65ad-49a9-8a05-b927be5a5373\") " pod="openshift-must-gather-tpczw/must-gather-hrzrj" Feb 27 19:41:57 crc kubenswrapper[4981]: I0227 19:41:57.401915 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpflm\" (UniqueName: \"kubernetes.io/projected/47f8e1a3-65ad-49a9-8a05-b927be5a5373-kube-api-access-xpflm\") pod \"must-gather-hrzrj\" (UID: \"47f8e1a3-65ad-49a9-8a05-b927be5a5373\") " pod="openshift-must-gather-tpczw/must-gather-hrzrj" Feb 27 19:41:57 crc kubenswrapper[4981]: I0227 19:41:57.528659 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tpczw/must-gather-hrzrj" Feb 27 19:41:57 crc kubenswrapper[4981]: I0227 19:41:57.830104 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tpczw/must-gather-hrzrj"] Feb 27 19:41:58 crc kubenswrapper[4981]: I0227 19:41:58.397291 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tpczw/must-gather-hrzrj" event={"ID":"47f8e1a3-65ad-49a9-8a05-b927be5a5373","Type":"ContainerStarted","Data":"00543db9bdf179ecbcac1d21bd5a729f7a55c3aa3e677079afd4bb88ec040c03"} Feb 27 19:42:00 crc kubenswrapper[4981]: I0227 19:42:00.148315 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537022-gk6n2"] Feb 27 19:42:00 crc kubenswrapper[4981]: I0227 19:42:00.149539 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" Feb 27 19:42:00 crc kubenswrapper[4981]: I0227 19:42:00.151799 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:42:00 crc kubenswrapper[4981]: I0227 19:42:00.151825 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5pdhf" Feb 27 19:42:00 crc kubenswrapper[4981]: I0227 19:42:00.152100 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:42:00 crc kubenswrapper[4981]: I0227 19:42:00.158004 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537022-gk6n2"] Feb 27 19:42:00 crc kubenswrapper[4981]: I0227 19:42:00.284521 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qphf7\" (UniqueName: \"kubernetes.io/projected/e853497d-5551-44e1-82d2-9915151f5e46-kube-api-access-qphf7\") pod \"auto-csr-approver-29537022-gk6n2\" (UID: \"e853497d-5551-44e1-82d2-9915151f5e46\") " pod="openshift-infra/auto-csr-approver-29537022-gk6n2" Feb 27 19:42:00 crc kubenswrapper[4981]: I0227 19:42:00.386288 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qphf7\" (UniqueName: \"kubernetes.io/projected/e853497d-5551-44e1-82d2-9915151f5e46-kube-api-access-qphf7\") pod \"auto-csr-approver-29537022-gk6n2\" (UID: \"e853497d-5551-44e1-82d2-9915151f5e46\") " pod="openshift-infra/auto-csr-approver-29537022-gk6n2" Feb 27 19:42:00 crc kubenswrapper[4981]: I0227 19:42:00.417947 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qphf7\" (UniqueName: \"kubernetes.io/projected/e853497d-5551-44e1-82d2-9915151f5e46-kube-api-access-qphf7\") pod \"auto-csr-approver-29537022-gk6n2\" (UID: \"e853497d-5551-44e1-82d2-9915151f5e46\") " 
pod="openshift-infra/auto-csr-approver-29537022-gk6n2" Feb 27 19:42:00 crc kubenswrapper[4981]: I0227 19:42:00.475955 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" Feb 27 19:42:03 crc kubenswrapper[4981]: I0227 19:42:03.853578 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537022-gk6n2"] Feb 27 19:42:04 crc kubenswrapper[4981]: I0227 19:42:04.439535 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" event={"ID":"e853497d-5551-44e1-82d2-9915151f5e46","Type":"ContainerStarted","Data":"589d7bc5d17e02e2828f63ac89e3ed5c8c64c25c62c428e14f70acf803f7373b"} Feb 27 19:42:04 crc kubenswrapper[4981]: I0227 19:42:04.442080 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tpczw/must-gather-hrzrj" event={"ID":"47f8e1a3-65ad-49a9-8a05-b927be5a5373","Type":"ContainerStarted","Data":"cc5ccc2ac1adad43ff5fdeea62e47ae1f3a52f09c767f1c3f4ad4c7f4165074b"} Feb 27 19:42:04 crc kubenswrapper[4981]: I0227 19:42:04.442115 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tpczw/must-gather-hrzrj" event={"ID":"47f8e1a3-65ad-49a9-8a05-b927be5a5373","Type":"ContainerStarted","Data":"6642535209ddf82451b534b5e0b0cdc8f44e22ab0f4b9313b765e6b435bf831d"} Feb 27 19:42:11 crc kubenswrapper[4981]: I0227 19:42:11.646898 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tpczw/must-gather-hrzrj" podStartSLOduration=8.657312507 podStartE2EDuration="14.646879655s" podCreationTimestamp="2026-02-27 19:41:57 +0000 UTC" firstStartedPulling="2026-02-27 19:41:57.83681868 +0000 UTC m=+3417.315599840" lastFinishedPulling="2026-02-27 19:42:03.826385818 +0000 UTC m=+3423.305166988" observedRunningTime="2026-02-27 19:42:04.459553608 +0000 UTC m=+3423.938334798" watchObservedRunningTime="2026-02-27 19:42:11.646879655 
+0000 UTC m=+3431.125660815" Feb 27 19:42:21 crc kubenswrapper[4981]: E0227 19:42:21.558446 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: determining manifest MIME type for docker://registry.redhat.io/openshift4/ose-cli:latest: reading manifest sha256:69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9 in registry.redhat.io/openshift4/ose-cli: received unexpected HTTP status: 500 Internal Server Error" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:42:21 crc kubenswrapper[4981]: E0227 19:42:21.559187 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:42:21 crc kubenswrapper[4981]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:42:21 crc kubenswrapper[4981]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qphf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537022-gk6n2_openshift-infra(e853497d-5551-44e1-82d2-9915151f5e46): ErrImagePull: copying system image from manifest list: determining manifest MIME type for docker://registry.redhat.io/openshift4/ose-cli:latest: reading manifest 
sha256:69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9 in registry.redhat.io/openshift4/ose-cli: received unexpected HTTP status: 500 Internal Server Error Feb 27 19:42:21 crc kubenswrapper[4981]: > logger="UnhandledError" Feb 27 19:42:21 crc kubenswrapper[4981]: E0227 19:42:21.560385 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: determining manifest MIME type for docker://registry.redhat.io/openshift4/ose-cli:latest: reading manifest sha256:69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9 in registry.redhat.io/openshift4/ose-cli: received unexpected HTTP status: 500 Internal Server Error\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46" Feb 27 19:42:22 crc kubenswrapper[4981]: E0227 19:42:22.565103 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46" Feb 27 19:42:37 crc kubenswrapper[4981]: E0227 19:42:37.570413 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:42:37 crc kubenswrapper[4981]: E0227 19:42:37.571100 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:42:37 crc kubenswrapper[4981]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range 
.items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:42:37 crc kubenswrapper[4981]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qphf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537022-gk6n2_openshift-infra(e853497d-5551-44e1-82d2-9915151f5e46): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:42:37 crc kubenswrapper[4981]: > logger="UnhandledError" Feb 27 19:42:37 crc kubenswrapper[4981]: E0227 19:42:37.572276 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46" Feb 27 19:42:41 crc kubenswrapper[4981]: I0227 19:42:41.720799 4981 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-operators-f9ltj"] Feb 27 19:42:41 crc kubenswrapper[4981]: I0227 19:42:41.722955 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9ltj" Feb 27 19:42:41 crc kubenswrapper[4981]: I0227 19:42:41.735631 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9ltj"] Feb 27 19:42:41 crc kubenswrapper[4981]: I0227 19:42:41.849927 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64c273de-1f65-4ec7-b2a0-c070e4d29ce6-catalog-content\") pod \"redhat-operators-f9ltj\" (UID: \"64c273de-1f65-4ec7-b2a0-c070e4d29ce6\") " pod="openshift-marketplace/redhat-operators-f9ltj" Feb 27 19:42:41 crc kubenswrapper[4981]: I0227 19:42:41.850036 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrpvp\" (UniqueName: \"kubernetes.io/projected/64c273de-1f65-4ec7-b2a0-c070e4d29ce6-kube-api-access-lrpvp\") pod \"redhat-operators-f9ltj\" (UID: \"64c273de-1f65-4ec7-b2a0-c070e4d29ce6\") " pod="openshift-marketplace/redhat-operators-f9ltj" Feb 27 19:42:41 crc kubenswrapper[4981]: I0227 19:42:41.850127 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64c273de-1f65-4ec7-b2a0-c070e4d29ce6-utilities\") pod \"redhat-operators-f9ltj\" (UID: \"64c273de-1f65-4ec7-b2a0-c070e4d29ce6\") " pod="openshift-marketplace/redhat-operators-f9ltj" Feb 27 19:42:41 crc kubenswrapper[4981]: I0227 19:42:41.916969 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s7jgl"] Feb 27 19:42:41 crc kubenswrapper[4981]: I0227 19:42:41.918431 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s7jgl" Feb 27 19:42:41 crc kubenswrapper[4981]: I0227 19:42:41.936787 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s7jgl"] Feb 27 19:42:41 crc kubenswrapper[4981]: I0227 19:42:41.951868 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrpvp\" (UniqueName: \"kubernetes.io/projected/64c273de-1f65-4ec7-b2a0-c070e4d29ce6-kube-api-access-lrpvp\") pod \"redhat-operators-f9ltj\" (UID: \"64c273de-1f65-4ec7-b2a0-c070e4d29ce6\") " pod="openshift-marketplace/redhat-operators-f9ltj" Feb 27 19:42:41 crc kubenswrapper[4981]: I0227 19:42:41.951991 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64c273de-1f65-4ec7-b2a0-c070e4d29ce6-utilities\") pod \"redhat-operators-f9ltj\" (UID: \"64c273de-1f65-4ec7-b2a0-c070e4d29ce6\") " pod="openshift-marketplace/redhat-operators-f9ltj" Feb 27 19:42:41 crc kubenswrapper[4981]: I0227 19:42:41.952047 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64c273de-1f65-4ec7-b2a0-c070e4d29ce6-catalog-content\") pod \"redhat-operators-f9ltj\" (UID: \"64c273de-1f65-4ec7-b2a0-c070e4d29ce6\") " pod="openshift-marketplace/redhat-operators-f9ltj" Feb 27 19:42:41 crc kubenswrapper[4981]: I0227 19:42:41.952611 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64c273de-1f65-4ec7-b2a0-c070e4d29ce6-catalog-content\") pod \"redhat-operators-f9ltj\" (UID: \"64c273de-1f65-4ec7-b2a0-c070e4d29ce6\") " pod="openshift-marketplace/redhat-operators-f9ltj" Feb 27 19:42:41 crc kubenswrapper[4981]: I0227 19:42:41.953307 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/64c273de-1f65-4ec7-b2a0-c070e4d29ce6-utilities\") pod \"redhat-operators-f9ltj\" (UID: \"64c273de-1f65-4ec7-b2a0-c070e4d29ce6\") " pod="openshift-marketplace/redhat-operators-f9ltj" Feb 27 19:42:41 crc kubenswrapper[4981]: I0227 19:42:41.981401 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrpvp\" (UniqueName: \"kubernetes.io/projected/64c273de-1f65-4ec7-b2a0-c070e4d29ce6-kube-api-access-lrpvp\") pod \"redhat-operators-f9ltj\" (UID: \"64c273de-1f65-4ec7-b2a0-c070e4d29ce6\") " pod="openshift-marketplace/redhat-operators-f9ltj" Feb 27 19:42:42 crc kubenswrapper[4981]: I0227 19:42:42.043120 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9ltj" Feb 27 19:42:42 crc kubenswrapper[4981]: I0227 19:42:42.053776 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c006c9c-d6e0-46b9-af87-487c821d5593-utilities\") pod \"community-operators-s7jgl\" (UID: \"6c006c9c-d6e0-46b9-af87-487c821d5593\") " pod="openshift-marketplace/community-operators-s7jgl" Feb 27 19:42:42 crc kubenswrapper[4981]: I0227 19:42:42.053833 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm9pk\" (UniqueName: \"kubernetes.io/projected/6c006c9c-d6e0-46b9-af87-487c821d5593-kube-api-access-vm9pk\") pod \"community-operators-s7jgl\" (UID: \"6c006c9c-d6e0-46b9-af87-487c821d5593\") " pod="openshift-marketplace/community-operators-s7jgl" Feb 27 19:42:42 crc kubenswrapper[4981]: I0227 19:42:42.053870 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c006c9c-d6e0-46b9-af87-487c821d5593-catalog-content\") pod \"community-operators-s7jgl\" (UID: \"6c006c9c-d6e0-46b9-af87-487c821d5593\") " 
pod="openshift-marketplace/community-operators-s7jgl" Feb 27 19:42:42 crc kubenswrapper[4981]: I0227 19:42:42.155148 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm9pk\" (UniqueName: \"kubernetes.io/projected/6c006c9c-d6e0-46b9-af87-487c821d5593-kube-api-access-vm9pk\") pod \"community-operators-s7jgl\" (UID: \"6c006c9c-d6e0-46b9-af87-487c821d5593\") " pod="openshift-marketplace/community-operators-s7jgl" Feb 27 19:42:42 crc kubenswrapper[4981]: I0227 19:42:42.155474 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c006c9c-d6e0-46b9-af87-487c821d5593-catalog-content\") pod \"community-operators-s7jgl\" (UID: \"6c006c9c-d6e0-46b9-af87-487c821d5593\") " pod="openshift-marketplace/community-operators-s7jgl" Feb 27 19:42:42 crc kubenswrapper[4981]: I0227 19:42:42.155546 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c006c9c-d6e0-46b9-af87-487c821d5593-utilities\") pod \"community-operators-s7jgl\" (UID: \"6c006c9c-d6e0-46b9-af87-487c821d5593\") " pod="openshift-marketplace/community-operators-s7jgl" Feb 27 19:42:42 crc kubenswrapper[4981]: I0227 19:42:42.156277 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c006c9c-d6e0-46b9-af87-487c821d5593-utilities\") pod \"community-operators-s7jgl\" (UID: \"6c006c9c-d6e0-46b9-af87-487c821d5593\") " pod="openshift-marketplace/community-operators-s7jgl" Feb 27 19:42:42 crc kubenswrapper[4981]: I0227 19:42:42.156273 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c006c9c-d6e0-46b9-af87-487c821d5593-catalog-content\") pod \"community-operators-s7jgl\" (UID: \"6c006c9c-d6e0-46b9-af87-487c821d5593\") " 
pod="openshift-marketplace/community-operators-s7jgl" Feb 27 19:42:42 crc kubenswrapper[4981]: I0227 19:42:42.179700 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm9pk\" (UniqueName: \"kubernetes.io/projected/6c006c9c-d6e0-46b9-af87-487c821d5593-kube-api-access-vm9pk\") pod \"community-operators-s7jgl\" (UID: \"6c006c9c-d6e0-46b9-af87-487c821d5593\") " pod="openshift-marketplace/community-operators-s7jgl" Feb 27 19:42:42 crc kubenswrapper[4981]: I0227 19:42:42.243901 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s7jgl" Feb 27 19:42:42 crc kubenswrapper[4981]: I0227 19:42:42.501568 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9ltj"] Feb 27 19:42:42 crc kubenswrapper[4981]: I0227 19:42:42.696587 4981 generic.go:334] "Generic (PLEG): container finished" podID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" containerID="cce4ae88c06e10aeeef14e46b3a1a27bfbf3ddad8a433728025acf5ae404a148" exitCode=0 Feb 27 19:42:42 crc kubenswrapper[4981]: I0227 19:42:42.696649 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9ltj" event={"ID":"64c273de-1f65-4ec7-b2a0-c070e4d29ce6","Type":"ContainerDied","Data":"cce4ae88c06e10aeeef14e46b3a1a27bfbf3ddad8a433728025acf5ae404a148"} Feb 27 19:42:42 crc kubenswrapper[4981]: I0227 19:42:42.697246 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9ltj" event={"ID":"64c273de-1f65-4ec7-b2a0-c070e4d29ce6","Type":"ContainerStarted","Data":"2f20435a2fe0e0fa1b057439ae49cfd4b445966e298258338c71ea7243dfb063"} Feb 27 19:42:42 crc kubenswrapper[4981]: I0227 19:42:42.790856 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s7jgl"] Feb 27 19:42:43 crc kubenswrapper[4981]: E0227 19:42:43.502950 4981 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 19:42:43 crc kubenswrapper[4981]: E0227 19:42:43.503120 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrpvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,Resiz
ePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-f9ltj_openshift-marketplace(64c273de-1f65-4ec7-b2a0-c070e4d29ce6): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:42:43 crc kubenswrapper[4981]: E0227 19:42:43.504276 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" Feb 27 19:42:43 crc kubenswrapper[4981]: I0227 19:42:43.704636 4981 generic.go:334] "Generic (PLEG): container finished" podID="6c006c9c-d6e0-46b9-af87-487c821d5593" containerID="71f61c8583b64c6e7e66d4d393126898e9b566f600fe9dd289bcf70d6e364287" exitCode=0 Feb 27 19:42:43 crc kubenswrapper[4981]: I0227 19:42:43.704794 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7jgl" event={"ID":"6c006c9c-d6e0-46b9-af87-487c821d5593","Type":"ContainerDied","Data":"71f61c8583b64c6e7e66d4d393126898e9b566f600fe9dd289bcf70d6e364287"} Feb 27 19:42:43 crc kubenswrapper[4981]: I0227 19:42:43.705312 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7jgl" event={"ID":"6c006c9c-d6e0-46b9-af87-487c821d5593","Type":"ContainerStarted","Data":"3aba04b6e0d1f67d08ec9e39892e865e149e5a6d44e09ad7e5dd995a7283827b"} Feb 27 19:42:43 crc kubenswrapper[4981]: 
E0227 19:42:43.707176 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" Feb 27 19:42:44 crc kubenswrapper[4981]: E0227 19:42:44.341726 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 19:42:44 crc kubenswrapper[4981]: E0227 19:42:44.341877 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vm9pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-s7jgl_openshift-marketplace(6c006c9c-d6e0-46b9-af87-487c821d5593): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:42:44 crc kubenswrapper[4981]: E0227 19:42:44.343026 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593" Feb 27 19:42:44 crc kubenswrapper[4981]: E0227 19:42:44.713200 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593" Feb 27 19:42:50 crc kubenswrapper[4981]: I0227 19:42:50.248711 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:42:50 crc kubenswrapper[4981]: I0227 19:42:50.249291 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:42:52 crc kubenswrapper[4981]: E0227 19:42:52.631419 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46" Feb 27 19:42:57 crc kubenswrapper[4981]: E0227 19:42:57.301434 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying 
system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 19:42:57 crc kubenswrapper[4981]: E0227 19:42:57.301936 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vm9pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPoli
cy:nil,} start failed in pod community-operators-s7jgl_openshift-marketplace(6c006c9c-d6e0-46b9-af87-487c821d5593): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:42:57 crc kubenswrapper[4981]: E0227 19:42:57.303115 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593" Feb 27 19:42:57 crc kubenswrapper[4981]: E0227 19:42:57.379590 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 19:42:57 crc kubenswrapper[4981]: E0227 19:42:57.379739 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrpvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-f9ltj_openshift-marketplace(64c273de-1f65-4ec7-b2a0-c070e4d29ce6): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:42:57 crc kubenswrapper[4981]: E0227 19:42:57.381489 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading 
signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" Feb 27 19:43:00 crc kubenswrapper[4981]: I0227 19:43:00.700169 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg_7b6fd45b-7ec1-45b0-b05d-a4e216ff5780/util/0.log" Feb 27 19:43:00 crc kubenswrapper[4981]: I0227 19:43:00.883417 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg_7b6fd45b-7ec1-45b0-b05d-a4e216ff5780/pull/0.log" Feb 27 19:43:00 crc kubenswrapper[4981]: I0227 19:43:00.898751 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg_7b6fd45b-7ec1-45b0-b05d-a4e216ff5780/util/0.log" Feb 27 19:43:00 crc kubenswrapper[4981]: I0227 19:43:00.899420 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg_7b6fd45b-7ec1-45b0-b05d-a4e216ff5780/pull/0.log" Feb 27 19:43:01 crc kubenswrapper[4981]: I0227 19:43:01.112681 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg_7b6fd45b-7ec1-45b0-b05d-a4e216ff5780/pull/0.log" Feb 27 19:43:01 crc kubenswrapper[4981]: I0227 19:43:01.145261 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg_7b6fd45b-7ec1-45b0-b05d-a4e216ff5780/util/0.log" Feb 27 19:43:01 crc kubenswrapper[4981]: I0227 19:43:01.187756 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_4e313431da4c187e3bc9cfd2e1bb5f3f982d252344f1d1db33673208a644mdg_7b6fd45b-7ec1-45b0-b05d-a4e216ff5780/extract/0.log" Feb 27 19:43:01 crc kubenswrapper[4981]: I0227 19:43:01.617945 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-5d87c9d997-sbs2q_12a33549-4f35-4fe1-851c-21e46a44dff6/manager/0.log" Feb 27 19:43:02 crc kubenswrapper[4981]: I0227 19:43:02.020439 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-64db6967f8-tjvzl_2cd1d521-194a-48fa-9412-a95ff0c2c598/manager/0.log" Feb 27 19:43:02 crc kubenswrapper[4981]: I0227 19:43:02.031462 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-cf99c678f-59gjc_2741c246-6bf8-411d-bdd7-29cb20588c0c/manager/0.log" Feb 27 19:43:02 crc kubenswrapper[4981]: I0227 19:43:02.253377 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-78bc7f9bd9-dhczx_a21f22e5-6cc2-43cc-890c-c9e42d8b12c5/manager/0.log" Feb 27 19:43:02 crc kubenswrapper[4981]: I0227 19:43:02.752281 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-9fqts_7a1c1676-014d-4de0-ab20-a951ad5bb7fe/manager/0.log" Feb 27 19:43:02 crc kubenswrapper[4981]: I0227 19:43:02.814266 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-f7fcc58b9-r27km_2042b3a5-c802-49c2-911b-b28eb19aecf5/manager/0.log" Feb 27 19:43:02 crc kubenswrapper[4981]: I0227 19:43:02.843184 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-545456dc4-6kvkg_5e144a53-3c1b-49db-9f08-d93ebe9fb576/manager/0.log" Feb 27 19:43:03 crc kubenswrapper[4981]: I0227 19:43:03.120261 4981 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55ffd4876b-sblkk_96d76f06-213f-4b51-9dfa-7e77c5b97174/manager/0.log" Feb 27 19:43:03 crc kubenswrapper[4981]: I0227 19:43:03.189307 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-6fkss_67789b9f-79ac-4901-8acc-22a86fb876c4/manager/0.log" Feb 27 19:43:03 crc kubenswrapper[4981]: I0227 19:43:03.393522 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-556b8b874-k2kn8_e1c487e5-53af-41ef-8713-87d17ab9632d/manager/0.log" Feb 27 19:43:03 crc kubenswrapper[4981]: I0227 19:43:03.612228 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54688575f-6xfvc_1636f598-89d5-474c-85a9-69ea06f889de/manager/0.log" Feb 27 19:43:03 crc kubenswrapper[4981]: I0227 19:43:03.810386 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-74b6b5dc96-bq9tz_8120c80b-1df9-4534-b5c6-1ff42e7dd5f9/manager/0.log" Feb 27 19:43:03 crc kubenswrapper[4981]: I0227 19:43:03.915151 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5d86c7ddb7-jvb62_70ce2fb0-509d-4f5a-aff5-8b71df9f78c4/manager/0.log" Feb 27 19:43:04 crc kubenswrapper[4981]: I0227 19:43:04.117414 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9chxvph_87494b7e-5ff9-4bbf-b2b6-848c5d9269dc/manager/0.log" Feb 27 19:43:04 crc kubenswrapper[4981]: I0227 19:43:04.414485 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7698fb7476-ljffl_ded84d09-908f-47fd-b75b-25013113939f/operator/0.log" Feb 27 19:43:04 crc kubenswrapper[4981]: I0227 19:43:04.634546 
4981 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 19:43:04 crc kubenswrapper[4981]: I0227 19:43:04.690931 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-sbzgx_fd6567e9-7326-42da-8631-11a5b074f573/registry-server/0.log" Feb 27 19:43:04 crc kubenswrapper[4981]: I0227 19:43:04.873681 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-75684d597f-h9cbz_e9987372-8f11-4038-939a-75d1152e5667/manager/0.log" Feb 27 19:43:05 crc kubenswrapper[4981]: I0227 19:43:05.013746 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-648564c9fc-bvts5_7cbe4d2e-bd57-452d-b873-709e1de024e7/manager/0.log" Feb 27 19:43:05 crc kubenswrapper[4981]: I0227 19:43:05.116154 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-wlwb7_e210121e-ac51-4667-9bbc-7080ed583a49/operator/0.log" Feb 27 19:43:05 crc kubenswrapper[4981]: I0227 19:43:05.315344 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9b9ff9f4d-xzqdb_2691a6d3-7eae-4d8c-b2e4-2157b87f0766/manager/0.log" Feb 27 19:43:05 crc kubenswrapper[4981]: I0227 19:43:05.518331 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5fdb694969-rb9hs_93a2ac79-08d9-4559-a954-bdf0b2eb4dab/manager/0.log" Feb 27 19:43:05 crc kubenswrapper[4981]: E0227 19:43:05.579949 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 
(Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:43:05 crc kubenswrapper[4981]: E0227 19:43:05.580107 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:43:05 crc kubenswrapper[4981]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:43:05 crc kubenswrapper[4981]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qphf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537022-gk6n2_openshift-infra(e853497d-5551-44e1-82d2-9915151f5e46): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:43:05 crc kubenswrapper[4981]: > logger="UnhandledError" Feb 27 19:43:05 crc kubenswrapper[4981]: E0227 19:43:05.581155 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46" Feb 27 19:43:05 crc kubenswrapper[4981]: I0227 19:43:05.612194 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-55b5ff4dbb-w46z2_f28d8002-92dc-43b8-a2d5-858fd350c18c/manager/0.log" Feb 27 19:43:05 crc kubenswrapper[4981]: I0227 19:43:05.741697 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-67c78cbb8b-dmjqm_d5f991dd-8062-43a8-8725-7a60c5a27a14/manager/0.log" Feb 27 19:43:05 crc kubenswrapper[4981]: I0227 19:43:05.747620 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-9pvd5_1ac62c06-bfa2-435e-a497-7d0ce40f0fd4/manager/0.log" Feb 27 19:43:07 crc kubenswrapper[4981]: I0227 19:43:07.730152 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6db6876945-jtd4l_fdf56547-b3d3-4481-acea-493c4ea4b2d9/manager/0.log" Feb 27 19:43:08 crc kubenswrapper[4981]: E0227 19:43:08.630842 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593" Feb 27 19:43:10 crc kubenswrapper[4981]: E0227 19:43:10.630258 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" Feb 27 19:43:16 crc kubenswrapper[4981]: E0227 19:43:16.630724 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46" Feb 27 19:43:20 crc kubenswrapper[4981]: I0227 19:43:20.248897 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:43:20 crc kubenswrapper[4981]: I0227 19:43:20.249247 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:43:21 crc kubenswrapper[4981]: E0227 19:43:21.420709 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 19:43:21 crc kubenswrapper[4981]: E0227 19:43:21.421139 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vm9pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-s7jgl_openshift-marketplace(6c006c9c-d6e0-46b9-af87-487c821d5593): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:43:21 crc 
kubenswrapper[4981]: E0227 19:43:21.422299 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593" Feb 27 19:43:24 crc kubenswrapper[4981]: E0227 19:43:24.346131 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 19:43:24 crc kubenswrapper[4981]: E0227 19:43:24.346620 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrpvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-f9ltj_openshift-marketplace(64c273de-1f65-4ec7-b2a0-c070e4d29ce6): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:43:24 crc kubenswrapper[4981]: E0227 19:43:24.347859 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading 
signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" Feb 27 19:43:24 crc kubenswrapper[4981]: I0227 19:43:24.619831 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-bkfrt_d5b69559-dbbb-451e-8a89-0d8c61a363f3/control-plane-machine-set-operator/0.log" Feb 27 19:43:24 crc kubenswrapper[4981]: I0227 19:43:24.793535 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jdhvt_1107c99b-98a7-4103-9e6c-dde234daacaf/kube-rbac-proxy/0.log" Feb 27 19:43:24 crc kubenswrapper[4981]: I0227 19:43:24.813425 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jdhvt_1107c99b-98a7-4103-9e6c-dde234daacaf/machine-api-operator/0.log" Feb 27 19:43:31 crc kubenswrapper[4981]: E0227 19:43:31.635247 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46" Feb 27 19:43:33 crc kubenswrapper[4981]: E0227 19:43:33.630748 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593" Feb 27 19:43:33 crc kubenswrapper[4981]: I0227 19:43:33.803502 4981 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-sw9dj"] Feb 27 19:43:33 crc kubenswrapper[4981]: I0227 19:43:33.805475 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sw9dj" Feb 27 19:43:33 crc kubenswrapper[4981]: I0227 19:43:33.819443 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sw9dj"] Feb 27 19:43:33 crc kubenswrapper[4981]: I0227 19:43:33.944811 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f1a7e95-fd5e-440b-8df5-aebf383a2d8a-catalog-content\") pod \"redhat-marketplace-sw9dj\" (UID: \"9f1a7e95-fd5e-440b-8df5-aebf383a2d8a\") " pod="openshift-marketplace/redhat-marketplace-sw9dj" Feb 27 19:43:33 crc kubenswrapper[4981]: I0227 19:43:33.945153 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdlsz\" (UniqueName: \"kubernetes.io/projected/9f1a7e95-fd5e-440b-8df5-aebf383a2d8a-kube-api-access-qdlsz\") pod \"redhat-marketplace-sw9dj\" (UID: \"9f1a7e95-fd5e-440b-8df5-aebf383a2d8a\") " pod="openshift-marketplace/redhat-marketplace-sw9dj" Feb 27 19:43:33 crc kubenswrapper[4981]: I0227 19:43:33.945193 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f1a7e95-fd5e-440b-8df5-aebf383a2d8a-utilities\") pod \"redhat-marketplace-sw9dj\" (UID: \"9f1a7e95-fd5e-440b-8df5-aebf383a2d8a\") " pod="openshift-marketplace/redhat-marketplace-sw9dj" Feb 27 19:43:34 crc kubenswrapper[4981]: I0227 19:43:34.046515 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f1a7e95-fd5e-440b-8df5-aebf383a2d8a-utilities\") pod \"redhat-marketplace-sw9dj\" (UID: \"9f1a7e95-fd5e-440b-8df5-aebf383a2d8a\") " 
pod="openshift-marketplace/redhat-marketplace-sw9dj" Feb 27 19:43:34 crc kubenswrapper[4981]: I0227 19:43:34.046647 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f1a7e95-fd5e-440b-8df5-aebf383a2d8a-catalog-content\") pod \"redhat-marketplace-sw9dj\" (UID: \"9f1a7e95-fd5e-440b-8df5-aebf383a2d8a\") " pod="openshift-marketplace/redhat-marketplace-sw9dj" Feb 27 19:43:34 crc kubenswrapper[4981]: I0227 19:43:34.046676 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdlsz\" (UniqueName: \"kubernetes.io/projected/9f1a7e95-fd5e-440b-8df5-aebf383a2d8a-kube-api-access-qdlsz\") pod \"redhat-marketplace-sw9dj\" (UID: \"9f1a7e95-fd5e-440b-8df5-aebf383a2d8a\") " pod="openshift-marketplace/redhat-marketplace-sw9dj" Feb 27 19:43:34 crc kubenswrapper[4981]: I0227 19:43:34.047141 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f1a7e95-fd5e-440b-8df5-aebf383a2d8a-utilities\") pod \"redhat-marketplace-sw9dj\" (UID: \"9f1a7e95-fd5e-440b-8df5-aebf383a2d8a\") " pod="openshift-marketplace/redhat-marketplace-sw9dj" Feb 27 19:43:34 crc kubenswrapper[4981]: I0227 19:43:34.047162 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f1a7e95-fd5e-440b-8df5-aebf383a2d8a-catalog-content\") pod \"redhat-marketplace-sw9dj\" (UID: \"9f1a7e95-fd5e-440b-8df5-aebf383a2d8a\") " pod="openshift-marketplace/redhat-marketplace-sw9dj" Feb 27 19:43:34 crc kubenswrapper[4981]: I0227 19:43:34.075971 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdlsz\" (UniqueName: \"kubernetes.io/projected/9f1a7e95-fd5e-440b-8df5-aebf383a2d8a-kube-api-access-qdlsz\") pod \"redhat-marketplace-sw9dj\" (UID: \"9f1a7e95-fd5e-440b-8df5-aebf383a2d8a\") " 
pod="openshift-marketplace/redhat-marketplace-sw9dj" Feb 27 19:43:34 crc kubenswrapper[4981]: I0227 19:43:34.124983 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sw9dj" Feb 27 19:43:34 crc kubenswrapper[4981]: I0227 19:43:34.536205 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sw9dj"] Feb 27 19:43:34 crc kubenswrapper[4981]: W0227 19:43:34.537537 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f1a7e95_fd5e_440b_8df5_aebf383a2d8a.slice/crio-6011c1ea74d4a4230bef8a1555cacee48cdda42194ced988e99c8294cf657a5c WatchSource:0}: Error finding container 6011c1ea74d4a4230bef8a1555cacee48cdda42194ced988e99c8294cf657a5c: Status 404 returned error can't find the container with id 6011c1ea74d4a4230bef8a1555cacee48cdda42194ced988e99c8294cf657a5c Feb 27 19:43:35 crc kubenswrapper[4981]: I0227 19:43:35.016702 4981 generic.go:334] "Generic (PLEG): container finished" podID="9f1a7e95-fd5e-440b-8df5-aebf383a2d8a" containerID="0bca60463545ce4225070a9d6cfecb3b4d50008b657c183ecf279984ad76953a" exitCode=0 Feb 27 19:43:35 crc kubenswrapper[4981]: I0227 19:43:35.016748 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sw9dj" event={"ID":"9f1a7e95-fd5e-440b-8df5-aebf383a2d8a","Type":"ContainerDied","Data":"0bca60463545ce4225070a9d6cfecb3b4d50008b657c183ecf279984ad76953a"} Feb 27 19:43:35 crc kubenswrapper[4981]: I0227 19:43:35.016775 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sw9dj" event={"ID":"9f1a7e95-fd5e-440b-8df5-aebf383a2d8a","Type":"ContainerStarted","Data":"6011c1ea74d4a4230bef8a1555cacee48cdda42194ced988e99c8294cf657a5c"} Feb 27 19:43:35 crc kubenswrapper[4981]: E0227 19:43:35.535536 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = 
Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 19:43:35 crc kubenswrapper[4981]: E0227 19:43:35.535992 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdlsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerRes
izePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-sw9dj_openshift-marketplace(9f1a7e95-fd5e-440b-8df5-aebf383a2d8a): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:43:35 crc kubenswrapper[4981]: E0227 19:43:35.537202 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-sw9dj" podUID="9f1a7e95-fd5e-440b-8df5-aebf383a2d8a" Feb 27 19:43:35 crc kubenswrapper[4981]: E0227 19:43:35.629704 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" Feb 27 19:43:36 crc kubenswrapper[4981]: E0227 19:43:36.041898 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sw9dj" podUID="9f1a7e95-fd5e-440b-8df5-aebf383a2d8a" Feb 27 19:43:36 crc kubenswrapper[4981]: I0227 19:43:36.144031 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-545d4d4674-wq6zs_344136f5-bd6a-4fb8-8f50-b049e04956ab/cert-manager-controller/0.log" Feb 27 19:43:36 crc kubenswrapper[4981]: I0227 19:43:36.232643 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-vkhnk_757ddca8-db4b-483a-8f0f-f649431f54da/cert-manager-cainjector/0.log" Feb 27 19:43:36 crc kubenswrapper[4981]: I0227 19:43:36.300662 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-k4fph_8d477e9b-09c2-464a-8dfa-c42dc9a8a6e4/cert-manager-webhook/0.log" Feb 27 19:43:42 crc kubenswrapper[4981]: E0227 19:43:42.630798 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46" Feb 27 19:43:47 crc kubenswrapper[4981]: I0227 19:43:47.291422 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-4fnww_bacb7ae1-b5de-400a-9182-850c94bff2ac/nmstate-console-plugin/0.log" Feb 27 19:43:47 crc kubenswrapper[4981]: I0227 19:43:47.472447 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-5g6ww_252e90b6-bbc9-40ed-a7f6-df2cd8bec420/nmstate-handler/0.log" Feb 27 19:43:47 crc kubenswrapper[4981]: I0227 19:43:47.605922 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-s77xl_59ad1269-77a3-4b47-833b-60da24bbe283/kube-rbac-proxy/0.log" Feb 27 19:43:47 crc kubenswrapper[4981]: E0227 19:43:47.630309 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" Feb 27 19:43:47 crc kubenswrapper[4981]: I0227 19:43:47.654505 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-s77xl_59ad1269-77a3-4b47-833b-60da24bbe283/nmstate-metrics/0.log" Feb 27 19:43:47 crc kubenswrapper[4981]: I0227 19:43:47.690537 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-kxr86_8b932696-4ff1-477c-a0d4-710f47224107/nmstate-operator/0.log" Feb 27 19:43:47 crc kubenswrapper[4981]: I0227 19:43:47.820002 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-hqtml_fecb7b86-12e2-42ad-af13-8d5b3cbcae05/nmstate-webhook/0.log" Feb 27 19:43:48 crc kubenswrapper[4981]: E0227 19:43:48.630205 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593" Feb 27 19:43:50 crc kubenswrapper[4981]: I0227 19:43:50.249117 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:43:50 crc kubenswrapper[4981]: I0227 19:43:50.249458 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:43:50 crc kubenswrapper[4981]: I0227 
19:43:50.249515 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 19:43:50 crc kubenswrapper[4981]: I0227 19:43:50.250321 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7e7587b85d64eef60150f43bd74b50c7e64a21cc95eaa522a2a9bc99615746d6"} pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 19:43:50 crc kubenswrapper[4981]: I0227 19:43:50.250391 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" containerID="cri-o://7e7587b85d64eef60150f43bd74b50c7e64a21cc95eaa522a2a9bc99615746d6" gracePeriod=600 Feb 27 19:43:51 crc kubenswrapper[4981]: I0227 19:43:51.132658 4981 generic.go:334] "Generic (PLEG): container finished" podID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerID="7e7587b85d64eef60150f43bd74b50c7e64a21cc95eaa522a2a9bc99615746d6" exitCode=0 Feb 27 19:43:51 crc kubenswrapper[4981]: I0227 19:43:51.133323 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerDied","Data":"7e7587b85d64eef60150f43bd74b50c7e64a21cc95eaa522a2a9bc99615746d6"} Feb 27 19:43:51 crc kubenswrapper[4981]: I0227 19:43:51.133365 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerStarted","Data":"810a98630d243f775d15a5f7a0dbc4550e506380d2e30e6da6d41ffca9dc5d6d"} Feb 27 19:43:51 crc kubenswrapper[4981]: I0227 19:43:51.133404 4981 scope.go:117] 
"RemoveContainer" containerID="ee1c6e473e8d3e0329d9c9699c5eb5f093fb8148647ec6574338e0e81f15cc78" Feb 27 19:43:51 crc kubenswrapper[4981]: E0227 19:43:51.400488 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 19:43:51 crc kubenswrapper[4981]: E0227 19:43:51.400669 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdlsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-sw9dj_openshift-marketplace(9f1a7e95-fd5e-440b-8df5-aebf383a2d8a): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:43:51 crc kubenswrapper[4981]: E0227 19:43:51.401874 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-sw9dj" podUID="9f1a7e95-fd5e-440b-8df5-aebf383a2d8a" Feb 27 19:43:55 crc kubenswrapper[4981]: E0227 19:43:55.465959 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:43:55 crc kubenswrapper[4981]: E0227 19:43:55.466653 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:43:55 crc kubenswrapper[4981]: container 
&Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:43:55 crc kubenswrapper[4981]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qphf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537022-gk6n2_openshift-infra(e853497d-5551-44e1-82d2-9915151f5e46): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:43:55 crc kubenswrapper[4981]: > logger="UnhandledError" Feb 27 19:43:55 crc kubenswrapper[4981]: E0227 19:43:55.467927 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" 
podUID="e853497d-5551-44e1-82d2-9915151f5e46" Feb 27 19:43:59 crc kubenswrapper[4981]: E0227 19:43:59.631324 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593" Feb 27 19:44:00 crc kubenswrapper[4981]: I0227 19:44:00.140965 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537024-zfz4c"] Feb 27 19:44:00 crc kubenswrapper[4981]: I0227 19:44:00.142431 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537024-zfz4c" Feb 27 19:44:00 crc kubenswrapper[4981]: I0227 19:44:00.149892 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537024-zfz4c"] Feb 27 19:44:00 crc kubenswrapper[4981]: I0227 19:44:00.304906 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fz9k\" (UniqueName: \"kubernetes.io/projected/916bf2c9-1243-489a-a087-90f6cfa99f40-kube-api-access-4fz9k\") pod \"auto-csr-approver-29537024-zfz4c\" (UID: \"916bf2c9-1243-489a-a087-90f6cfa99f40\") " pod="openshift-infra/auto-csr-approver-29537024-zfz4c" Feb 27 19:44:00 crc kubenswrapper[4981]: I0227 19:44:00.406529 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fz9k\" (UniqueName: \"kubernetes.io/projected/916bf2c9-1243-489a-a087-90f6cfa99f40-kube-api-access-4fz9k\") pod \"auto-csr-approver-29537024-zfz4c\" (UID: \"916bf2c9-1243-489a-a087-90f6cfa99f40\") " pod="openshift-infra/auto-csr-approver-29537024-zfz4c" Feb 27 19:44:00 crc kubenswrapper[4981]: I0227 19:44:00.442840 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fz9k\" 
(UniqueName: \"kubernetes.io/projected/916bf2c9-1243-489a-a087-90f6cfa99f40-kube-api-access-4fz9k\") pod \"auto-csr-approver-29537024-zfz4c\" (UID: \"916bf2c9-1243-489a-a087-90f6cfa99f40\") " pod="openshift-infra/auto-csr-approver-29537024-zfz4c" Feb 27 19:44:00 crc kubenswrapper[4981]: I0227 19:44:00.468215 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537024-zfz4c" Feb 27 19:44:00 crc kubenswrapper[4981]: E0227 19:44:00.633742 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" Feb 27 19:44:00 crc kubenswrapper[4981]: I0227 19:44:00.787462 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537024-zfz4c"] Feb 27 19:44:01 crc kubenswrapper[4981]: I0227 19:44:01.210429 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537024-zfz4c" event={"ID":"916bf2c9-1243-489a-a087-90f6cfa99f40","Type":"ContainerStarted","Data":"d0cc480591c2056e6dc6ecc439b7bee8b5fcf4916af1519f43e16178c18d4e69"} Feb 27 19:44:01 crc kubenswrapper[4981]: E0227 19:44:01.734883 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:44:01 crc kubenswrapper[4981]: E0227 19:44:01.735017 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:44:01 crc kubenswrapper[4981]: container 
&Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:44:01 crc kubenswrapper[4981]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4fz9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537024-zfz4c_openshift-infra(916bf2c9-1243-489a-a087-90f6cfa99f40): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:44:01 crc kubenswrapper[4981]: > logger="UnhandledError" Feb 27 19:44:01 crc kubenswrapper[4981]: E0227 19:44:01.736326 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537024-zfz4c" 
podUID="916bf2c9-1243-489a-a087-90f6cfa99f40" Feb 27 19:44:02 crc kubenswrapper[4981]: E0227 19:44:02.221165 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537024-zfz4c" podUID="916bf2c9-1243-489a-a087-90f6cfa99f40" Feb 27 19:44:04 crc kubenswrapper[4981]: E0227 19:44:04.632974 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sw9dj" podUID="9f1a7e95-fd5e-440b-8df5-aebf383a2d8a" Feb 27 19:44:07 crc kubenswrapper[4981]: E0227 19:44:07.631177 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46" Feb 27 19:44:11 crc kubenswrapper[4981]: E0227 19:44:11.199050 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 19:44:11 crc kubenswrapper[4981]: E0227 19:44:11.199507 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs 
--catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vm9pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-s7jgl_openshift-marketplace(6c006c9c-d6e0-46b9-af87-487c821d5593): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:44:11 crc kubenswrapper[4981]: E0227 19:44:11.200696 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: 
\"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593" Feb 27 19:44:12 crc kubenswrapper[4981]: I0227 19:44:12.169086 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-rwrfs_373faaaa-18eb-4e83-80f8-7828aea58a3a/kube-rbac-proxy/0.log" Feb 27 19:44:12 crc kubenswrapper[4981]: E0227 19:44:12.231276 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 19:44:12 crc kubenswrapper[4981]: E0227 19:44:12.231430 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrpvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-f9ltj_openshift-marketplace(64c273de-1f65-4ec7-b2a0-c070e4d29ce6): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:44:12 crc kubenswrapper[4981]: E0227 19:44:12.232519 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading 
signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" Feb 27 19:44:12 crc kubenswrapper[4981]: I0227 19:44:12.445374 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jfmbn_24537f79-2aa5-4ba1-afc0-e91183569040/cp-frr-files/0.log" Feb 27 19:44:12 crc kubenswrapper[4981]: I0227 19:44:12.546595 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-rwrfs_373faaaa-18eb-4e83-80f8-7828aea58a3a/controller/0.log" Feb 27 19:44:12 crc kubenswrapper[4981]: I0227 19:44:12.630521 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jfmbn_24537f79-2aa5-4ba1-afc0-e91183569040/cp-frr-files/0.log" Feb 27 19:44:12 crc kubenswrapper[4981]: I0227 19:44:12.632185 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jfmbn_24537f79-2aa5-4ba1-afc0-e91183569040/cp-metrics/0.log" Feb 27 19:44:12 crc kubenswrapper[4981]: I0227 19:44:12.634948 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jfmbn_24537f79-2aa5-4ba1-afc0-e91183569040/cp-reloader/0.log" Feb 27 19:44:12 crc kubenswrapper[4981]: I0227 19:44:12.739251 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jfmbn_24537f79-2aa5-4ba1-afc0-e91183569040/cp-reloader/0.log" Feb 27 19:44:12 crc kubenswrapper[4981]: I0227 19:44:12.913870 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jfmbn_24537f79-2aa5-4ba1-afc0-e91183569040/cp-frr-files/0.log" Feb 27 19:44:12 crc kubenswrapper[4981]: I0227 19:44:12.919134 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-jfmbn_24537f79-2aa5-4ba1-afc0-e91183569040/cp-metrics/0.log" Feb 27 19:44:12 crc kubenswrapper[4981]: I0227 19:44:12.931769 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jfmbn_24537f79-2aa5-4ba1-afc0-e91183569040/cp-reloader/0.log" Feb 27 19:44:12 crc kubenswrapper[4981]: I0227 19:44:12.976290 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jfmbn_24537f79-2aa5-4ba1-afc0-e91183569040/cp-metrics/0.log" Feb 27 19:44:13 crc kubenswrapper[4981]: I0227 19:44:13.153628 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jfmbn_24537f79-2aa5-4ba1-afc0-e91183569040/cp-reloader/0.log" Feb 27 19:44:13 crc kubenswrapper[4981]: I0227 19:44:13.177417 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jfmbn_24537f79-2aa5-4ba1-afc0-e91183569040/controller/0.log" Feb 27 19:44:13 crc kubenswrapper[4981]: I0227 19:44:13.183901 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jfmbn_24537f79-2aa5-4ba1-afc0-e91183569040/cp-metrics/0.log" Feb 27 19:44:13 crc kubenswrapper[4981]: I0227 19:44:13.189192 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jfmbn_24537f79-2aa5-4ba1-afc0-e91183569040/cp-frr-files/0.log" Feb 27 19:44:13 crc kubenswrapper[4981]: I0227 19:44:13.398784 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jfmbn_24537f79-2aa5-4ba1-afc0-e91183569040/kube-rbac-proxy-frr/0.log" Feb 27 19:44:13 crc kubenswrapper[4981]: I0227 19:44:13.398935 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jfmbn_24537f79-2aa5-4ba1-afc0-e91183569040/kube-rbac-proxy/0.log" Feb 27 19:44:13 crc kubenswrapper[4981]: I0227 19:44:13.400021 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-jfmbn_24537f79-2aa5-4ba1-afc0-e91183569040/frr-metrics/0.log" Feb 27 19:44:13 crc kubenswrapper[4981]: I0227 19:44:13.587075 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-mp522_f2c01995-bfbd-4e83-bc8e-e476d7d32a4b/frr-k8s-webhook-server/0.log" Feb 27 19:44:13 crc kubenswrapper[4981]: I0227 19:44:13.645496 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jfmbn_24537f79-2aa5-4ba1-afc0-e91183569040/reloader/0.log" Feb 27 19:44:13 crc kubenswrapper[4981]: I0227 19:44:13.903123 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-f8c457c64-w9f5p_8eb3c962-8cf8-4a0c-ad4f-a2f5c8b6f48c/manager/0.log" Feb 27 19:44:13 crc kubenswrapper[4981]: I0227 19:44:13.978549 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6dc7b56ff5-n8fqf_21150d0e-51a3-4f9a-beb7-d4511f4680da/webhook-server/0.log" Feb 27 19:44:14 crc kubenswrapper[4981]: I0227 19:44:14.115708 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kzbf5_a5fc2773-7650-4e03-9c68-6cbdab555ae0/kube-rbac-proxy/0.log" Feb 27 19:44:14 crc kubenswrapper[4981]: E0227 19:44:14.592319 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:44:14 crc kubenswrapper[4981]: E0227 19:44:14.592471 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:44:14 crc kubenswrapper[4981]: container 
&Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:44:14 crc kubenswrapper[4981]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4fz9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537024-zfz4c_openshift-infra(916bf2c9-1243-489a-a087-90f6cfa99f40): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:44:14 crc kubenswrapper[4981]: > logger="UnhandledError" Feb 27 19:44:14 crc kubenswrapper[4981]: E0227 19:44:14.593972 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537024-zfz4c" 
podUID="916bf2c9-1243-489a-a087-90f6cfa99f40" Feb 27 19:44:14 crc kubenswrapper[4981]: I0227 19:44:14.706530 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kzbf5_a5fc2773-7650-4e03-9c68-6cbdab555ae0/speaker/0.log" Feb 27 19:44:14 crc kubenswrapper[4981]: I0227 19:44:14.796513 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jfmbn_24537f79-2aa5-4ba1-afc0-e91183569040/frr/0.log" Feb 27 19:44:16 crc kubenswrapper[4981]: E0227 19:44:16.342726 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 19:44:16 crc kubenswrapper[4981]: E0227 19:44:16.342911 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdlsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-sw9dj_openshift-marketplace(9f1a7e95-fd5e-440b-8df5-aebf383a2d8a): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:44:16 crc kubenswrapper[4981]: E0227 19:44:16.344157 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-sw9dj" podUID="9f1a7e95-fd5e-440b-8df5-aebf383a2d8a" Feb 27 19:44:18 crc kubenswrapper[4981]: E0227 19:44:18.630448 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46" Feb 27 19:44:23 crc kubenswrapper[4981]: E0227 19:44:23.630871 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593" Feb 27 19:44:26 crc kubenswrapper[4981]: E0227 19:44:26.630232 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" Feb 27 19:44:27 crc kubenswrapper[4981]: I0227 19:44:27.995659 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp_3a16a4f3-0450-40f6-b7b9-26ce12441e3b/util/0.log" Feb 27 19:44:28 crc kubenswrapper[4981]: I0227 19:44:28.120490 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp_3a16a4f3-0450-40f6-b7b9-26ce12441e3b/util/0.log" 
Feb 27 19:44:28 crc kubenswrapper[4981]: I0227 19:44:28.152347 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp_3a16a4f3-0450-40f6-b7b9-26ce12441e3b/pull/0.log" Feb 27 19:44:28 crc kubenswrapper[4981]: I0227 19:44:28.157532 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp_3a16a4f3-0450-40f6-b7b9-26ce12441e3b/pull/0.log" Feb 27 19:44:28 crc kubenswrapper[4981]: I0227 19:44:28.367981 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp_3a16a4f3-0450-40f6-b7b9-26ce12441e3b/util/0.log" Feb 27 19:44:28 crc kubenswrapper[4981]: I0227 19:44:28.407989 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp_3a16a4f3-0450-40f6-b7b9-26ce12441e3b/pull/0.log" Feb 27 19:44:28 crc kubenswrapper[4981]: I0227 19:44:28.426323 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82hkvtp_3a16a4f3-0450-40f6-b7b9-26ce12441e3b/extract/0.log" Feb 27 19:44:28 crc kubenswrapper[4981]: I0227 19:44:28.529495 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj_dcffc4a2-219d-4e33-afe1-c8eab8b67ae4/util/0.log" Feb 27 19:44:28 crc kubenswrapper[4981]: E0227 19:44:28.629489 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537024-zfz4c" podUID="916bf2c9-1243-489a-a087-90f6cfa99f40" Feb 27 19:44:28 crc kubenswrapper[4981]: I0227 
19:44:28.720399 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj_dcffc4a2-219d-4e33-afe1-c8eab8b67ae4/util/0.log" Feb 27 19:44:28 crc kubenswrapper[4981]: I0227 19:44:28.744798 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj_dcffc4a2-219d-4e33-afe1-c8eab8b67ae4/pull/0.log" Feb 27 19:44:28 crc kubenswrapper[4981]: I0227 19:44:28.789550 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj_dcffc4a2-219d-4e33-afe1-c8eab8b67ae4/pull/0.log" Feb 27 19:44:28 crc kubenswrapper[4981]: I0227 19:44:28.908796 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj_dcffc4a2-219d-4e33-afe1-c8eab8b67ae4/util/0.log" Feb 27 19:44:28 crc kubenswrapper[4981]: I0227 19:44:28.929729 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj_dcffc4a2-219d-4e33-afe1-c8eab8b67ae4/pull/0.log" Feb 27 19:44:28 crc kubenswrapper[4981]: I0227 19:44:28.948554 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5kzrbj_dcffc4a2-219d-4e33-afe1-c8eab8b67ae4/extract/0.log" Feb 27 19:44:29 crc kubenswrapper[4981]: I0227 19:44:29.084379 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2krnf_ac79f530-9dad-40da-9fcb-d82a30bd8b57/extract-utilities/0.log" Feb 27 19:44:29 crc kubenswrapper[4981]: I0227 19:44:29.268245 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2krnf_ac79f530-9dad-40da-9fcb-d82a30bd8b57/extract-content/0.log" 
Feb 27 19:44:29 crc kubenswrapper[4981]: I0227 19:44:29.281205 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2krnf_ac79f530-9dad-40da-9fcb-d82a30bd8b57/extract-utilities/0.log" Feb 27 19:44:29 crc kubenswrapper[4981]: I0227 19:44:29.282363 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2krnf_ac79f530-9dad-40da-9fcb-d82a30bd8b57/extract-content/0.log" Feb 27 19:44:29 crc kubenswrapper[4981]: I0227 19:44:29.449320 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2krnf_ac79f530-9dad-40da-9fcb-d82a30bd8b57/extract-content/0.log" Feb 27 19:44:29 crc kubenswrapper[4981]: I0227 19:44:29.460036 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2krnf_ac79f530-9dad-40da-9fcb-d82a30bd8b57/extract-utilities/0.log" Feb 27 19:44:29 crc kubenswrapper[4981]: I0227 19:44:29.677131 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s7jgl_6c006c9c-d6e0-46b9-af87-487c821d5593/extract-utilities/0.log" Feb 27 19:44:29 crc kubenswrapper[4981]: I0227 19:44:29.924110 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s7jgl_6c006c9c-d6e0-46b9-af87-487c821d5593/extract-utilities/0.log" Feb 27 19:44:30 crc kubenswrapper[4981]: I0227 19:44:30.107360 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s7jgl_6c006c9c-d6e0-46b9-af87-487c821d5593/extract-utilities/0.log" Feb 27 19:44:30 crc kubenswrapper[4981]: I0227 19:44:30.131908 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2krnf_ac79f530-9dad-40da-9fcb-d82a30bd8b57/registry-server/0.log" Feb 27 19:44:30 crc kubenswrapper[4981]: I0227 19:44:30.292360 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-wc2tk_9f98ae2b-e26f-4877-870b-93c73484de63/extract-utilities/0.log" Feb 27 19:44:30 crc kubenswrapper[4981]: I0227 19:44:30.464716 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wc2tk_9f98ae2b-e26f-4877-870b-93c73484de63/extract-content/0.log" Feb 27 19:44:30 crc kubenswrapper[4981]: I0227 19:44:30.480267 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wc2tk_9f98ae2b-e26f-4877-870b-93c73484de63/extract-utilities/0.log" Feb 27 19:44:30 crc kubenswrapper[4981]: I0227 19:44:30.481945 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wc2tk_9f98ae2b-e26f-4877-870b-93c73484de63/extract-content/0.log" Feb 27 19:44:30 crc kubenswrapper[4981]: E0227 19:44:30.630950 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sw9dj" podUID="9f1a7e95-fd5e-440b-8df5-aebf383a2d8a" Feb 27 19:44:30 crc kubenswrapper[4981]: I0227 19:44:30.669635 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wc2tk_9f98ae2b-e26f-4877-870b-93c73484de63/extract-utilities/0.log" Feb 27 19:44:30 crc kubenswrapper[4981]: I0227 19:44:30.708600 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wc2tk_9f98ae2b-e26f-4877-870b-93c73484de63/extract-content/0.log" Feb 27 19:44:30 crc kubenswrapper[4981]: I0227 19:44:30.831679 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4_bc605a22-0ad3-4fee-9082-27d102a048f7/util/0.log" Feb 27 19:44:31 crc 
kubenswrapper[4981]: I0227 19:44:31.068954 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4_bc605a22-0ad3-4fee-9082-27d102a048f7/pull/0.log" Feb 27 19:44:31 crc kubenswrapper[4981]: I0227 19:44:31.144230 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4_bc605a22-0ad3-4fee-9082-27d102a048f7/pull/0.log" Feb 27 19:44:31 crc kubenswrapper[4981]: I0227 19:44:31.246485 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4_bc605a22-0ad3-4fee-9082-27d102a048f7/util/0.log" Feb 27 19:44:31 crc kubenswrapper[4981]: I0227 19:44:31.372544 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wc2tk_9f98ae2b-e26f-4877-870b-93c73484de63/registry-server/0.log" Feb 27 19:44:31 crc kubenswrapper[4981]: I0227 19:44:31.391647 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4_bc605a22-0ad3-4fee-9082-27d102a048f7/util/0.log" Feb 27 19:44:31 crc kubenswrapper[4981]: I0227 19:44:31.412238 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4_bc605a22-0ad3-4fee-9082-27d102a048f7/pull/0.log" Feb 27 19:44:31 crc kubenswrapper[4981]: I0227 19:44:31.429036 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4749b4_bc605a22-0ad3-4fee-9082-27d102a048f7/extract/0.log" Feb 27 19:44:31 crc kubenswrapper[4981]: I0227 19:44:31.564791 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-6cjfh_5fc084d6-4cd6-4556-a0ba-80b909119353/marketplace-operator/0.log" Feb 27 19:44:31 crc kubenswrapper[4981]: E0227 19:44:31.633329 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46" Feb 27 19:44:31 crc kubenswrapper[4981]: I0227 19:44:31.749937 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rb599_2557a2d1-c08e-4a0a-b04e-a05aacf26465/extract-utilities/0.log" Feb 27 19:44:31 crc kubenswrapper[4981]: I0227 19:44:31.879630 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rb599_2557a2d1-c08e-4a0a-b04e-a05aacf26465/extract-content/0.log" Feb 27 19:44:31 crc kubenswrapper[4981]: I0227 19:44:31.885895 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rb599_2557a2d1-c08e-4a0a-b04e-a05aacf26465/extract-utilities/0.log" Feb 27 19:44:31 crc kubenswrapper[4981]: I0227 19:44:31.924319 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rb599_2557a2d1-c08e-4a0a-b04e-a05aacf26465/extract-content/0.log" Feb 27 19:44:32 crc kubenswrapper[4981]: I0227 19:44:32.091748 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rb599_2557a2d1-c08e-4a0a-b04e-a05aacf26465/extract-utilities/0.log" Feb 27 19:44:32 crc kubenswrapper[4981]: I0227 19:44:32.116540 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sw9dj_9f1a7e95-fd5e-440b-8df5-aebf383a2d8a/extract-utilities/0.log" Feb 27 19:44:32 crc kubenswrapper[4981]: I0227 19:44:32.122835 4981 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rb599_2557a2d1-c08e-4a0a-b04e-a05aacf26465/extract-content/0.log" Feb 27 19:44:32 crc kubenswrapper[4981]: I0227 19:44:32.207182 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rb599_2557a2d1-c08e-4a0a-b04e-a05aacf26465/registry-server/0.log" Feb 27 19:44:32 crc kubenswrapper[4981]: I0227 19:44:32.297891 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sw9dj_9f1a7e95-fd5e-440b-8df5-aebf383a2d8a/extract-utilities/0.log" Feb 27 19:44:32 crc kubenswrapper[4981]: I0227 19:44:32.482079 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sw9dj_9f1a7e95-fd5e-440b-8df5-aebf383a2d8a/extract-utilities/0.log" Feb 27 19:44:32 crc kubenswrapper[4981]: I0227 19:44:32.521403 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-djqnq_80d67677-93a5-4633-88fc-dde5d45e9756/extract-utilities/0.log" Feb 27 19:44:32 crc kubenswrapper[4981]: I0227 19:44:32.679808 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-djqnq_80d67677-93a5-4633-88fc-dde5d45e9756/extract-utilities/0.log" Feb 27 19:44:32 crc kubenswrapper[4981]: I0227 19:44:32.693607 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-djqnq_80d67677-93a5-4633-88fc-dde5d45e9756/extract-content/0.log" Feb 27 19:44:32 crc kubenswrapper[4981]: I0227 19:44:32.703412 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-djqnq_80d67677-93a5-4633-88fc-dde5d45e9756/extract-content/0.log" Feb 27 19:44:32 crc kubenswrapper[4981]: I0227 19:44:32.911906 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-djqnq_80d67677-93a5-4633-88fc-dde5d45e9756/extract-utilities/0.log" Feb 27 19:44:32 crc kubenswrapper[4981]: I0227 19:44:32.913013 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-djqnq_80d67677-93a5-4633-88fc-dde5d45e9756/extract-content/0.log" Feb 27 19:44:32 crc kubenswrapper[4981]: I0227 19:44:32.959474 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f9ltj_64c273de-1f65-4ec7-b2a0-c070e4d29ce6/extract-utilities/0.log" Feb 27 19:44:33 crc kubenswrapper[4981]: I0227 19:44:33.187883 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f9ltj_64c273de-1f65-4ec7-b2a0-c070e4d29ce6/extract-utilities/0.log" Feb 27 19:44:33 crc kubenswrapper[4981]: I0227 19:44:33.302006 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-djqnq_80d67677-93a5-4633-88fc-dde5d45e9756/registry-server/0.log" Feb 27 19:44:33 crc kubenswrapper[4981]: I0227 19:44:33.402412 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f9ltj_64c273de-1f65-4ec7-b2a0-c070e4d29ce6/extract-utilities/0.log" Feb 27 19:44:37 crc kubenswrapper[4981]: E0227 19:44:37.630815 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" Feb 27 19:44:38 crc kubenswrapper[4981]: E0227 19:44:38.629460 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593" Feb 27 19:44:44 crc kubenswrapper[4981]: I0227 19:44:44.496819 4981 generic.go:334] "Generic (PLEG): container finished" podID="916bf2c9-1243-489a-a087-90f6cfa99f40" containerID="002eae3b58d02a52130102f5b81b71638f2ca96d51e5987579f86d79719ac7a7" exitCode=0 Feb 27 19:44:44 crc kubenswrapper[4981]: I0227 19:44:44.496910 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537024-zfz4c" event={"ID":"916bf2c9-1243-489a-a087-90f6cfa99f40","Type":"ContainerDied","Data":"002eae3b58d02a52130102f5b81b71638f2ca96d51e5987579f86d79719ac7a7"} Feb 27 19:44:45 crc kubenswrapper[4981]: E0227 19:44:45.631428 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sw9dj" podUID="9f1a7e95-fd5e-440b-8df5-aebf383a2d8a" Feb 27 19:44:45 crc kubenswrapper[4981]: I0227 19:44:45.747546 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537024-zfz4c" Feb 27 19:44:45 crc kubenswrapper[4981]: I0227 19:44:45.906281 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fz9k\" (UniqueName: \"kubernetes.io/projected/916bf2c9-1243-489a-a087-90f6cfa99f40-kube-api-access-4fz9k\") pod \"916bf2c9-1243-489a-a087-90f6cfa99f40\" (UID: \"916bf2c9-1243-489a-a087-90f6cfa99f40\") " Feb 27 19:44:45 crc kubenswrapper[4981]: I0227 19:44:45.923205 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/916bf2c9-1243-489a-a087-90f6cfa99f40-kube-api-access-4fz9k" (OuterVolumeSpecName: "kube-api-access-4fz9k") pod "916bf2c9-1243-489a-a087-90f6cfa99f40" (UID: "916bf2c9-1243-489a-a087-90f6cfa99f40"). 
InnerVolumeSpecName "kube-api-access-4fz9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:44:46 crc kubenswrapper[4981]: I0227 19:44:46.007992 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fz9k\" (UniqueName: \"kubernetes.io/projected/916bf2c9-1243-489a-a087-90f6cfa99f40-kube-api-access-4fz9k\") on node \"crc\" DevicePath \"\"" Feb 27 19:44:46 crc kubenswrapper[4981]: I0227 19:44:46.511128 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537024-zfz4c" event={"ID":"916bf2c9-1243-489a-a087-90f6cfa99f40","Type":"ContainerDied","Data":"d0cc480591c2056e6dc6ecc439b7bee8b5fcf4916af1519f43e16178c18d4e69"} Feb 27 19:44:46 crc kubenswrapper[4981]: I0227 19:44:46.511172 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0cc480591c2056e6dc6ecc439b7bee8b5fcf4916af1519f43e16178c18d4e69" Feb 27 19:44:46 crc kubenswrapper[4981]: I0227 19:44:46.511223 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537024-zfz4c"
Feb 27 19:44:46 crc kubenswrapper[4981]: E0227 19:44:46.631927 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46"
Feb 27 19:44:46 crc kubenswrapper[4981]: I0227 19:44:46.815996 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537016-5jcs8"]
Feb 27 19:44:46 crc kubenswrapper[4981]: I0227 19:44:46.823293 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537016-5jcs8"]
Feb 27 19:44:47 crc kubenswrapper[4981]: I0227 19:44:47.637499 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e5cea75-9d0c-4908-9e15-e19cd3a1b925" path="/var/lib/kubelet/pods/2e5cea75-9d0c-4908-9e15-e19cd3a1b925/volumes"
Feb 27 19:44:49 crc kubenswrapper[4981]: E0227 19:44:49.630353 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593"
Feb 27 19:44:51 crc kubenswrapper[4981]: E0227 19:44:51.635918 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6"
Feb 27 19:44:59 crc kubenswrapper[4981]: E0227 19:44:59.631469 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46"
Feb 27 19:45:00 crc kubenswrapper[4981]: I0227 19:45:00.145910 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537025-j854l"]
Feb 27 19:45:00 crc kubenswrapper[4981]: E0227 19:45:00.146493 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916bf2c9-1243-489a-a087-90f6cfa99f40" containerName="oc"
Feb 27 19:45:00 crc kubenswrapper[4981]: I0227 19:45:00.146582 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="916bf2c9-1243-489a-a087-90f6cfa99f40" containerName="oc"
Feb 27 19:45:00 crc kubenswrapper[4981]: I0227 19:45:00.146794 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="916bf2c9-1243-489a-a087-90f6cfa99f40" containerName="oc"
Feb 27 19:45:00 crc kubenswrapper[4981]: I0227 19:45:00.147348 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-j854l"
Feb 27 19:45:00 crc kubenswrapper[4981]: I0227 19:45:00.150478 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 27 19:45:00 crc kubenswrapper[4981]: I0227 19:45:00.152271 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 27 19:45:00 crc kubenswrapper[4981]: I0227 19:45:00.157510 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537025-j854l"]
Feb 27 19:45:00 crc kubenswrapper[4981]: I0227 19:45:00.209672 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n75qm\" (UniqueName: \"kubernetes.io/projected/994db529-e5b6-4fea-bf69-812e97ecb7a9-kube-api-access-n75qm\") pod \"collect-profiles-29537025-j854l\" (UID: \"994db529-e5b6-4fea-bf69-812e97ecb7a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-j854l"
Feb 27 19:45:00 crc kubenswrapper[4981]: I0227 19:45:00.209736 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/994db529-e5b6-4fea-bf69-812e97ecb7a9-secret-volume\") pod \"collect-profiles-29537025-j854l\" (UID: \"994db529-e5b6-4fea-bf69-812e97ecb7a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-j854l"
Feb 27 19:45:00 crc kubenswrapper[4981]: I0227 19:45:00.209766 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/994db529-e5b6-4fea-bf69-812e97ecb7a9-config-volume\") pod \"collect-profiles-29537025-j854l\" (UID: \"994db529-e5b6-4fea-bf69-812e97ecb7a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-j854l"
Feb 27 19:45:00 crc kubenswrapper[4981]: I0227 19:45:00.310865 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n75qm\" (UniqueName: \"kubernetes.io/projected/994db529-e5b6-4fea-bf69-812e97ecb7a9-kube-api-access-n75qm\") pod \"collect-profiles-29537025-j854l\" (UID: \"994db529-e5b6-4fea-bf69-812e97ecb7a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-j854l"
Feb 27 19:45:00 crc kubenswrapper[4981]: I0227 19:45:00.310934 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/994db529-e5b6-4fea-bf69-812e97ecb7a9-secret-volume\") pod \"collect-profiles-29537025-j854l\" (UID: \"994db529-e5b6-4fea-bf69-812e97ecb7a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-j854l"
Feb 27 19:45:00 crc kubenswrapper[4981]: I0227 19:45:00.310973 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/994db529-e5b6-4fea-bf69-812e97ecb7a9-config-volume\") pod \"collect-profiles-29537025-j854l\" (UID: \"994db529-e5b6-4fea-bf69-812e97ecb7a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-j854l"
Feb 27 19:45:00 crc kubenswrapper[4981]: I0227 19:45:00.312027 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/994db529-e5b6-4fea-bf69-812e97ecb7a9-config-volume\") pod \"collect-profiles-29537025-j854l\" (UID: \"994db529-e5b6-4fea-bf69-812e97ecb7a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-j854l"
Feb 27 19:45:00 crc kubenswrapper[4981]: I0227 19:45:00.324983 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/994db529-e5b6-4fea-bf69-812e97ecb7a9-secret-volume\") pod \"collect-profiles-29537025-j854l\" (UID: \"994db529-e5b6-4fea-bf69-812e97ecb7a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-j854l"
Feb 27 19:45:00 crc kubenswrapper[4981]: I0227 19:45:00.328376 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n75qm\" (UniqueName: \"kubernetes.io/projected/994db529-e5b6-4fea-bf69-812e97ecb7a9-kube-api-access-n75qm\") pod \"collect-profiles-29537025-j854l\" (UID: \"994db529-e5b6-4fea-bf69-812e97ecb7a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-j854l"
Feb 27 19:45:00 crc kubenswrapper[4981]: I0227 19:45:00.466801 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-j854l"
Feb 27 19:45:00 crc kubenswrapper[4981]: I0227 19:45:00.895862 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537025-j854l"]
Feb 27 19:45:01 crc kubenswrapper[4981]: I0227 19:45:01.637182 4981 generic.go:334] "Generic (PLEG): container finished" podID="994db529-e5b6-4fea-bf69-812e97ecb7a9" containerID="69baaae896af1044cc2610a06e07e786beef73d276bde1d89f4e4ef43961b0aa" exitCode=0
Feb 27 19:45:01 crc kubenswrapper[4981]: I0227 19:45:01.645447 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-j854l" event={"ID":"994db529-e5b6-4fea-bf69-812e97ecb7a9","Type":"ContainerDied","Data":"69baaae896af1044cc2610a06e07e786beef73d276bde1d89f4e4ef43961b0aa"}
Feb 27 19:45:01 crc kubenswrapper[4981]: I0227 19:45:01.645721 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-j854l" event={"ID":"994db529-e5b6-4fea-bf69-812e97ecb7a9","Type":"ContainerStarted","Data":"89620441b0b16ad3128dca99a1843490f2123477af1ed498300764f9d3dcf2ce"}
Feb 27 19:45:02 crc kubenswrapper[4981]: E0227 19:45:02.629905 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6"
Feb 27 19:45:02 crc kubenswrapper[4981]: E0227 19:45:02.630069 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593"
Feb 27 19:45:02 crc kubenswrapper[4981]: I0227 19:45:02.644601 4981 generic.go:334] "Generic (PLEG): container finished" podID="9f1a7e95-fd5e-440b-8df5-aebf383a2d8a" containerID="629772d24f0d83c4eef060ff92db8911d3f239fee9d9ca6ae6d7c1614580a87d" exitCode=0
Feb 27 19:45:02 crc kubenswrapper[4981]: I0227 19:45:02.644660 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sw9dj" event={"ID":"9f1a7e95-fd5e-440b-8df5-aebf383a2d8a","Type":"ContainerDied","Data":"629772d24f0d83c4eef060ff92db8911d3f239fee9d9ca6ae6d7c1614580a87d"}
Feb 27 19:45:02 crc kubenswrapper[4981]: I0227 19:45:02.894119 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-j854l"
Feb 27 19:45:03 crc kubenswrapper[4981]: I0227 19:45:03.050569 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/994db529-e5b6-4fea-bf69-812e97ecb7a9-secret-volume\") pod \"994db529-e5b6-4fea-bf69-812e97ecb7a9\" (UID: \"994db529-e5b6-4fea-bf69-812e97ecb7a9\") "
Feb 27 19:45:03 crc kubenswrapper[4981]: I0227 19:45:03.050654 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n75qm\" (UniqueName: \"kubernetes.io/projected/994db529-e5b6-4fea-bf69-812e97ecb7a9-kube-api-access-n75qm\") pod \"994db529-e5b6-4fea-bf69-812e97ecb7a9\" (UID: \"994db529-e5b6-4fea-bf69-812e97ecb7a9\") "
Feb 27 19:45:03 crc kubenswrapper[4981]: I0227 19:45:03.050735 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/994db529-e5b6-4fea-bf69-812e97ecb7a9-config-volume\") pod \"994db529-e5b6-4fea-bf69-812e97ecb7a9\" (UID: \"994db529-e5b6-4fea-bf69-812e97ecb7a9\") "
Feb 27 19:45:03 crc kubenswrapper[4981]: I0227 19:45:03.051658 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/994db529-e5b6-4fea-bf69-812e97ecb7a9-config-volume" (OuterVolumeSpecName: "config-volume") pod "994db529-e5b6-4fea-bf69-812e97ecb7a9" (UID: "994db529-e5b6-4fea-bf69-812e97ecb7a9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 19:45:03 crc kubenswrapper[4981]: I0227 19:45:03.056843 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/994db529-e5b6-4fea-bf69-812e97ecb7a9-kube-api-access-n75qm" (OuterVolumeSpecName: "kube-api-access-n75qm") pod "994db529-e5b6-4fea-bf69-812e97ecb7a9" (UID: "994db529-e5b6-4fea-bf69-812e97ecb7a9"). InnerVolumeSpecName "kube-api-access-n75qm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:45:03 crc kubenswrapper[4981]: I0227 19:45:03.056977 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/994db529-e5b6-4fea-bf69-812e97ecb7a9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "994db529-e5b6-4fea-bf69-812e97ecb7a9" (UID: "994db529-e5b6-4fea-bf69-812e97ecb7a9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 19:45:03 crc kubenswrapper[4981]: I0227 19:45:03.153320 4981 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/994db529-e5b6-4fea-bf69-812e97ecb7a9-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 27 19:45:03 crc kubenswrapper[4981]: I0227 19:45:03.153361 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n75qm\" (UniqueName: \"kubernetes.io/projected/994db529-e5b6-4fea-bf69-812e97ecb7a9-kube-api-access-n75qm\") on node \"crc\" DevicePath \"\""
Feb 27 19:45:03 crc kubenswrapper[4981]: I0227 19:45:03.153375 4981 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/994db529-e5b6-4fea-bf69-812e97ecb7a9-config-volume\") on node \"crc\" DevicePath \"\""
Feb 27 19:45:03 crc kubenswrapper[4981]: I0227 19:45:03.651754 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sw9dj" event={"ID":"9f1a7e95-fd5e-440b-8df5-aebf383a2d8a","Type":"ContainerStarted","Data":"05bf7cdfe51481327c92af679bf529199fb5076549a334e5bfc5076274b36225"}
Feb 27 19:45:03 crc kubenswrapper[4981]: I0227 19:45:03.653856 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-j854l" event={"ID":"994db529-e5b6-4fea-bf69-812e97ecb7a9","Type":"ContainerDied","Data":"89620441b0b16ad3128dca99a1843490f2123477af1ed498300764f9d3dcf2ce"}
Feb 27 19:45:03 crc kubenswrapper[4981]: I0227 19:45:03.653884 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89620441b0b16ad3128dca99a1843490f2123477af1ed498300764f9d3dcf2ce"
Feb 27 19:45:03 crc kubenswrapper[4981]: I0227 19:45:03.653924 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537025-j854l"
Feb 27 19:45:03 crc kubenswrapper[4981]: I0227 19:45:03.677799 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sw9dj" podStartSLOduration=2.639354111 podStartE2EDuration="1m30.677779666s" podCreationTimestamp="2026-02-27 19:43:33 +0000 UTC" firstStartedPulling="2026-02-27 19:43:35.018527361 +0000 UTC m=+3514.497308511" lastFinishedPulling="2026-02-27 19:45:03.056952906 +0000 UTC m=+3602.535734066" observedRunningTime="2026-02-27 19:45:03.674284807 +0000 UTC m=+3603.153065967" watchObservedRunningTime="2026-02-27 19:45:03.677779666 +0000 UTC m=+3603.156560836"
Feb 27 19:45:03 crc kubenswrapper[4981]: I0227 19:45:03.958043 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536980-bf5w8"]
Feb 27 19:45:03 crc kubenswrapper[4981]: I0227 19:45:03.962852 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536980-bf5w8"]
Feb 27 19:45:04 crc kubenswrapper[4981]: I0227 19:45:04.125525 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sw9dj"
Feb 27 19:45:04 crc kubenswrapper[4981]: I0227 19:45:04.125604 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sw9dj"
Feb 27 19:45:05 crc kubenswrapper[4981]: I0227 19:45:05.179434 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-sw9dj" podUID="9f1a7e95-fd5e-440b-8df5-aebf383a2d8a" containerName="registry-server" probeResult="failure" output=<
Feb 27 19:45:05 crc kubenswrapper[4981]: timeout: failed to connect service ":50051" within 1s
Feb 27 19:45:05 crc kubenswrapper[4981]: >
Feb 27 19:45:05 crc kubenswrapper[4981]: I0227 19:45:05.639176 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa7d7b4b-cc16-4da4-94d1-4daae958bacf" path="/var/lib/kubelet/pods/aa7d7b4b-cc16-4da4-94d1-4daae958bacf/volumes"
Feb 27 19:45:13 crc kubenswrapper[4981]: E0227 19:45:13.632388 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593"
Feb 27 19:45:14 crc kubenswrapper[4981]: I0227 19:45:14.169004 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sw9dj"
Feb 27 19:45:14 crc kubenswrapper[4981]: I0227 19:45:14.210012 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sw9dj"
Feb 27 19:45:14 crc kubenswrapper[4981]: I0227 19:45:14.399234 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sw9dj"]
Feb 27 19:45:14 crc kubenswrapper[4981]: E0227 19:45:14.629543 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46"
Feb 27 19:45:15 crc kubenswrapper[4981]: I0227 19:45:15.759588 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sw9dj" podUID="9f1a7e95-fd5e-440b-8df5-aebf383a2d8a" containerName="registry-server" containerID="cri-o://05bf7cdfe51481327c92af679bf529199fb5076549a334e5bfc5076274b36225" gracePeriod=2
Feb 27 19:45:16 crc kubenswrapper[4981]: I0227 19:45:16.132725 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sw9dj"
Feb 27 19:45:16 crc kubenswrapper[4981]: I0227 19:45:16.260151 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f1a7e95-fd5e-440b-8df5-aebf383a2d8a-catalog-content\") pod \"9f1a7e95-fd5e-440b-8df5-aebf383a2d8a\" (UID: \"9f1a7e95-fd5e-440b-8df5-aebf383a2d8a\") "
Feb 27 19:45:16 crc kubenswrapper[4981]: I0227 19:45:16.260259 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f1a7e95-fd5e-440b-8df5-aebf383a2d8a-utilities\") pod \"9f1a7e95-fd5e-440b-8df5-aebf383a2d8a\" (UID: \"9f1a7e95-fd5e-440b-8df5-aebf383a2d8a\") "
Feb 27 19:45:16 crc kubenswrapper[4981]: I0227 19:45:16.260317 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdlsz\" (UniqueName: \"kubernetes.io/projected/9f1a7e95-fd5e-440b-8df5-aebf383a2d8a-kube-api-access-qdlsz\") pod \"9f1a7e95-fd5e-440b-8df5-aebf383a2d8a\" (UID: \"9f1a7e95-fd5e-440b-8df5-aebf383a2d8a\") "
Feb 27 19:45:16 crc kubenswrapper[4981]: I0227 19:45:16.261775 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f1a7e95-fd5e-440b-8df5-aebf383a2d8a-utilities" (OuterVolumeSpecName: "utilities") pod "9f1a7e95-fd5e-440b-8df5-aebf383a2d8a" (UID: "9f1a7e95-fd5e-440b-8df5-aebf383a2d8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 19:45:16 crc kubenswrapper[4981]: I0227 19:45:16.265948 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f1a7e95-fd5e-440b-8df5-aebf383a2d8a-kube-api-access-qdlsz" (OuterVolumeSpecName: "kube-api-access-qdlsz") pod "9f1a7e95-fd5e-440b-8df5-aebf383a2d8a" (UID: "9f1a7e95-fd5e-440b-8df5-aebf383a2d8a"). InnerVolumeSpecName "kube-api-access-qdlsz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:45:16 crc kubenswrapper[4981]: I0227 19:45:16.283888 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f1a7e95-fd5e-440b-8df5-aebf383a2d8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f1a7e95-fd5e-440b-8df5-aebf383a2d8a" (UID: "9f1a7e95-fd5e-440b-8df5-aebf383a2d8a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 19:45:16 crc kubenswrapper[4981]: I0227 19:45:16.361422 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f1a7e95-fd5e-440b-8df5-aebf383a2d8a-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 19:45:16 crc kubenswrapper[4981]: I0227 19:45:16.361461 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdlsz\" (UniqueName: \"kubernetes.io/projected/9f1a7e95-fd5e-440b-8df5-aebf383a2d8a-kube-api-access-qdlsz\") on node \"crc\" DevicePath \"\""
Feb 27 19:45:16 crc kubenswrapper[4981]: I0227 19:45:16.361474 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f1a7e95-fd5e-440b-8df5-aebf383a2d8a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 19:45:16 crc kubenswrapper[4981]: I0227 19:45:16.767098 4981 generic.go:334] "Generic (PLEG): container finished" podID="9f1a7e95-fd5e-440b-8df5-aebf383a2d8a" containerID="05bf7cdfe51481327c92af679bf529199fb5076549a334e5bfc5076274b36225" exitCode=0
Feb 27 19:45:16 crc kubenswrapper[4981]: I0227 19:45:16.767140 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sw9dj"
Feb 27 19:45:16 crc kubenswrapper[4981]: I0227 19:45:16.767143 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sw9dj" event={"ID":"9f1a7e95-fd5e-440b-8df5-aebf383a2d8a","Type":"ContainerDied","Data":"05bf7cdfe51481327c92af679bf529199fb5076549a334e5bfc5076274b36225"}
Feb 27 19:45:16 crc kubenswrapper[4981]: I0227 19:45:16.767289 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sw9dj" event={"ID":"9f1a7e95-fd5e-440b-8df5-aebf383a2d8a","Type":"ContainerDied","Data":"6011c1ea74d4a4230bef8a1555cacee48cdda42194ced988e99c8294cf657a5c"}
Feb 27 19:45:16 crc kubenswrapper[4981]: I0227 19:45:16.767326 4981 scope.go:117] "RemoveContainer" containerID="05bf7cdfe51481327c92af679bf529199fb5076549a334e5bfc5076274b36225"
Feb 27 19:45:16 crc kubenswrapper[4981]: I0227 19:45:16.796433 4981 scope.go:117] "RemoveContainer" containerID="629772d24f0d83c4eef060ff92db8911d3f239fee9d9ca6ae6d7c1614580a87d"
Feb 27 19:45:16 crc kubenswrapper[4981]: I0227 19:45:16.799938 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sw9dj"]
Feb 27 19:45:16 crc kubenswrapper[4981]: I0227 19:45:16.805891 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sw9dj"]
Feb 27 19:45:16 crc kubenswrapper[4981]: I0227 19:45:16.825604 4981 scope.go:117] "RemoveContainer" containerID="0bca60463545ce4225070a9d6cfecb3b4d50008b657c183ecf279984ad76953a"
Feb 27 19:45:16 crc kubenswrapper[4981]: I0227 19:45:16.841729 4981 scope.go:117] "RemoveContainer" containerID="05bf7cdfe51481327c92af679bf529199fb5076549a334e5bfc5076274b36225"
Feb 27 19:45:16 crc kubenswrapper[4981]: E0227 19:45:16.842197 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05bf7cdfe51481327c92af679bf529199fb5076549a334e5bfc5076274b36225\": container with ID starting with 05bf7cdfe51481327c92af679bf529199fb5076549a334e5bfc5076274b36225 not found: ID does not exist" containerID="05bf7cdfe51481327c92af679bf529199fb5076549a334e5bfc5076274b36225"
Feb 27 19:45:16 crc kubenswrapper[4981]: I0227 19:45:16.842244 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05bf7cdfe51481327c92af679bf529199fb5076549a334e5bfc5076274b36225"} err="failed to get container status \"05bf7cdfe51481327c92af679bf529199fb5076549a334e5bfc5076274b36225\": rpc error: code = NotFound desc = could not find container \"05bf7cdfe51481327c92af679bf529199fb5076549a334e5bfc5076274b36225\": container with ID starting with 05bf7cdfe51481327c92af679bf529199fb5076549a334e5bfc5076274b36225 not found: ID does not exist"
Feb 27 19:45:16 crc kubenswrapper[4981]: I0227 19:45:16.842272 4981 scope.go:117] "RemoveContainer" containerID="629772d24f0d83c4eef060ff92db8911d3f239fee9d9ca6ae6d7c1614580a87d"
Feb 27 19:45:16 crc kubenswrapper[4981]: E0227 19:45:16.842570 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"629772d24f0d83c4eef060ff92db8911d3f239fee9d9ca6ae6d7c1614580a87d\": container with ID starting with 629772d24f0d83c4eef060ff92db8911d3f239fee9d9ca6ae6d7c1614580a87d not found: ID does not exist" containerID="629772d24f0d83c4eef060ff92db8911d3f239fee9d9ca6ae6d7c1614580a87d"
Feb 27 19:45:16 crc kubenswrapper[4981]: I0227 19:45:16.842605 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"629772d24f0d83c4eef060ff92db8911d3f239fee9d9ca6ae6d7c1614580a87d"} err="failed to get container status \"629772d24f0d83c4eef060ff92db8911d3f239fee9d9ca6ae6d7c1614580a87d\": rpc error: code = NotFound desc = could not find container \"629772d24f0d83c4eef060ff92db8911d3f239fee9d9ca6ae6d7c1614580a87d\": container with ID starting with 629772d24f0d83c4eef060ff92db8911d3f239fee9d9ca6ae6d7c1614580a87d not found: ID does not exist"
Feb 27 19:45:16 crc kubenswrapper[4981]: I0227 19:45:16.842642 4981 scope.go:117] "RemoveContainer" containerID="0bca60463545ce4225070a9d6cfecb3b4d50008b657c183ecf279984ad76953a"
Feb 27 19:45:16 crc kubenswrapper[4981]: E0227 19:45:16.842839 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bca60463545ce4225070a9d6cfecb3b4d50008b657c183ecf279984ad76953a\": container with ID starting with 0bca60463545ce4225070a9d6cfecb3b4d50008b657c183ecf279984ad76953a not found: ID does not exist" containerID="0bca60463545ce4225070a9d6cfecb3b4d50008b657c183ecf279984ad76953a"
Feb 27 19:45:16 crc kubenswrapper[4981]: I0227 19:45:16.842859 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bca60463545ce4225070a9d6cfecb3b4d50008b657c183ecf279984ad76953a"} err="failed to get container status \"0bca60463545ce4225070a9d6cfecb3b4d50008b657c183ecf279984ad76953a\": rpc error: code = NotFound desc = could not find container \"0bca60463545ce4225070a9d6cfecb3b4d50008b657c183ecf279984ad76953a\": container with ID starting with 0bca60463545ce4225070a9d6cfecb3b4d50008b657c183ecf279984ad76953a not found: ID does not exist"
Feb 27 19:45:17 crc kubenswrapper[4981]: E0227 19:45:17.631257 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6"
Feb 27 19:45:17 crc kubenswrapper[4981]: I0227 19:45:17.640455 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f1a7e95-fd5e-440b-8df5-aebf383a2d8a" path="/var/lib/kubelet/pods/9f1a7e95-fd5e-440b-8df5-aebf383a2d8a/volumes"
Feb 27 19:45:23 crc kubenswrapper[4981]: I0227 19:45:23.668936 4981 scope.go:117] "RemoveContainer" containerID="d41c141f991aab917bae90260131f1a15cfe5240d21137814ca2c7d35650c59e"
Feb 27 19:45:23 crc kubenswrapper[4981]: I0227 19:45:23.714677 4981 scope.go:117] "RemoveContainer" containerID="f493742b48afcf508ee719d3fbf4500365b35a50a93d137536fdaca1cd73f69c"
Feb 27 19:45:24 crc kubenswrapper[4981]: E0227 19:45:24.630624 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593"
Feb 27 19:45:29 crc kubenswrapper[4981]: E0227 19:45:29.586285 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest"
Feb 27 19:45:29 crc kubenswrapper[4981]: E0227 19:45:29.586719 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 27 19:45:29 crc kubenswrapper[4981]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
Feb 27 19:45:29 crc kubenswrapper[4981]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qphf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537022-gk6n2_openshift-infra(e853497d-5551-44e1-82d2-9915151f5e46): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)
Feb 27 19:45:29 crc kubenswrapper[4981]: > logger="UnhandledError"
Feb 27 19:45:29 crc kubenswrapper[4981]: E0227 19:45:29.587876 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46"
Feb 27 19:45:31 crc kubenswrapper[4981]: E0227 19:45:31.633938 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6"
Feb 27 19:45:38 crc kubenswrapper[4981]: E0227 19:45:38.861428 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 27 19:45:38 crc kubenswrapper[4981]: E0227 19:45:38.862190 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vm9pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-s7jgl_openshift-marketplace(6c006c9c-d6e0-46b9-af87-487c821d5593): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError"
Feb 27 19:45:38 crc kubenswrapper[4981]: E0227 19:45:38.863601 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593"
Feb 27 19:45:40 crc kubenswrapper[4981]: E0227 19:45:40.632645 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46"
Feb 27 19:45:43 crc kubenswrapper[4981]: E0227 19:45:43.341667 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Feb 27 19:45:43 crc kubenswrapper[4981]: E0227 19:45:43.342271 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrpvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-f9ltj_openshift-marketplace(64c273de-1f65-4ec7-b2a0-c070e4d29ce6): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError"
Feb 27 19:45:43 crc kubenswrapper[4981]: E0227 19:45:43.343745 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading
signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" Feb 27 19:45:46 crc kubenswrapper[4981]: I0227 19:45:46.009419 4981 generic.go:334] "Generic (PLEG): container finished" podID="47f8e1a3-65ad-49a9-8a05-b927be5a5373" containerID="6642535209ddf82451b534b5e0b0cdc8f44e22ab0f4b9313b765e6b435bf831d" exitCode=0 Feb 27 19:45:46 crc kubenswrapper[4981]: I0227 19:45:46.009475 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tpczw/must-gather-hrzrj" event={"ID":"47f8e1a3-65ad-49a9-8a05-b927be5a5373","Type":"ContainerDied","Data":"6642535209ddf82451b534b5e0b0cdc8f44e22ab0f4b9313b765e6b435bf831d"} Feb 27 19:45:46 crc kubenswrapper[4981]: I0227 19:45:46.010055 4981 scope.go:117] "RemoveContainer" containerID="6642535209ddf82451b534b5e0b0cdc8f44e22ab0f4b9313b765e6b435bf831d" Feb 27 19:45:46 crc kubenswrapper[4981]: I0227 19:45:46.823677 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tpczw_must-gather-hrzrj_47f8e1a3-65ad-49a9-8a05-b927be5a5373/gather/0.log" Feb 27 19:45:50 crc kubenswrapper[4981]: I0227 19:45:50.248837 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:45:50 crc kubenswrapper[4981]: I0227 19:45:50.249477 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 27 19:45:53 crc kubenswrapper[4981]: E0227 19:45:53.632243 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593" Feb 27 19:45:53 crc kubenswrapper[4981]: E0227 19:45:53.632338 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46" Feb 27 19:45:54 crc kubenswrapper[4981]: I0227 19:45:54.317861 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tpczw/must-gather-hrzrj"] Feb 27 19:45:54 crc kubenswrapper[4981]: I0227 19:45:54.318422 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-tpczw/must-gather-hrzrj" podUID="47f8e1a3-65ad-49a9-8a05-b927be5a5373" containerName="copy" containerID="cri-o://cc5ccc2ac1adad43ff5fdeea62e47ae1f3a52f09c767f1c3f4ad4c7f4165074b" gracePeriod=2 Feb 27 19:45:54 crc kubenswrapper[4981]: I0227 19:45:54.323836 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tpczw/must-gather-hrzrj"] Feb 27 19:45:54 crc kubenswrapper[4981]: E0227 19:45:54.632732 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" Feb 27 19:45:54 crc kubenswrapper[4981]: I0227 19:45:54.700739 4981 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-must-gather-tpczw_must-gather-hrzrj_47f8e1a3-65ad-49a9-8a05-b927be5a5373/copy/0.log" Feb 27 19:45:54 crc kubenswrapper[4981]: I0227 19:45:54.701357 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tpczw/must-gather-hrzrj" Feb 27 19:45:54 crc kubenswrapper[4981]: I0227 19:45:54.791695 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpflm\" (UniqueName: \"kubernetes.io/projected/47f8e1a3-65ad-49a9-8a05-b927be5a5373-kube-api-access-xpflm\") pod \"47f8e1a3-65ad-49a9-8a05-b927be5a5373\" (UID: \"47f8e1a3-65ad-49a9-8a05-b927be5a5373\") " Feb 27 19:45:54 crc kubenswrapper[4981]: I0227 19:45:54.791744 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/47f8e1a3-65ad-49a9-8a05-b927be5a5373-must-gather-output\") pod \"47f8e1a3-65ad-49a9-8a05-b927be5a5373\" (UID: \"47f8e1a3-65ad-49a9-8a05-b927be5a5373\") " Feb 27 19:45:54 crc kubenswrapper[4981]: I0227 19:45:54.798563 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47f8e1a3-65ad-49a9-8a05-b927be5a5373-kube-api-access-xpflm" (OuterVolumeSpecName: "kube-api-access-xpflm") pod "47f8e1a3-65ad-49a9-8a05-b927be5a5373" (UID: "47f8e1a3-65ad-49a9-8a05-b927be5a5373"). InnerVolumeSpecName "kube-api-access-xpflm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:45:54 crc kubenswrapper[4981]: I0227 19:45:54.892744 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpflm\" (UniqueName: \"kubernetes.io/projected/47f8e1a3-65ad-49a9-8a05-b927be5a5373-kube-api-access-xpflm\") on node \"crc\" DevicePath \"\"" Feb 27 19:45:54 crc kubenswrapper[4981]: I0227 19:45:54.900190 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47f8e1a3-65ad-49a9-8a05-b927be5a5373-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "47f8e1a3-65ad-49a9-8a05-b927be5a5373" (UID: "47f8e1a3-65ad-49a9-8a05-b927be5a5373"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:45:54 crc kubenswrapper[4981]: I0227 19:45:54.993717 4981 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/47f8e1a3-65ad-49a9-8a05-b927be5a5373-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 27 19:45:55 crc kubenswrapper[4981]: I0227 19:45:55.080478 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tpczw_must-gather-hrzrj_47f8e1a3-65ad-49a9-8a05-b927be5a5373/copy/0.log" Feb 27 19:45:55 crc kubenswrapper[4981]: I0227 19:45:55.081181 4981 generic.go:334] "Generic (PLEG): container finished" podID="47f8e1a3-65ad-49a9-8a05-b927be5a5373" containerID="cc5ccc2ac1adad43ff5fdeea62e47ae1f3a52f09c767f1c3f4ad4c7f4165074b" exitCode=143 Feb 27 19:45:55 crc kubenswrapper[4981]: I0227 19:45:55.081234 4981 scope.go:117] "RemoveContainer" containerID="cc5ccc2ac1adad43ff5fdeea62e47ae1f3a52f09c767f1c3f4ad4c7f4165074b" Feb 27 19:45:55 crc kubenswrapper[4981]: I0227 19:45:55.081249 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tpczw/must-gather-hrzrj" Feb 27 19:45:55 crc kubenswrapper[4981]: I0227 19:45:55.099580 4981 scope.go:117] "RemoveContainer" containerID="6642535209ddf82451b534b5e0b0cdc8f44e22ab0f4b9313b765e6b435bf831d" Feb 27 19:45:55 crc kubenswrapper[4981]: I0227 19:45:55.155123 4981 scope.go:117] "RemoveContainer" containerID="cc5ccc2ac1adad43ff5fdeea62e47ae1f3a52f09c767f1c3f4ad4c7f4165074b" Feb 27 19:45:55 crc kubenswrapper[4981]: E0227 19:45:55.155678 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc5ccc2ac1adad43ff5fdeea62e47ae1f3a52f09c767f1c3f4ad4c7f4165074b\": container with ID starting with cc5ccc2ac1adad43ff5fdeea62e47ae1f3a52f09c767f1c3f4ad4c7f4165074b not found: ID does not exist" containerID="cc5ccc2ac1adad43ff5fdeea62e47ae1f3a52f09c767f1c3f4ad4c7f4165074b" Feb 27 19:45:55 crc kubenswrapper[4981]: I0227 19:45:55.155719 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc5ccc2ac1adad43ff5fdeea62e47ae1f3a52f09c767f1c3f4ad4c7f4165074b"} err="failed to get container status \"cc5ccc2ac1adad43ff5fdeea62e47ae1f3a52f09c767f1c3f4ad4c7f4165074b\": rpc error: code = NotFound desc = could not find container \"cc5ccc2ac1adad43ff5fdeea62e47ae1f3a52f09c767f1c3f4ad4c7f4165074b\": container with ID starting with cc5ccc2ac1adad43ff5fdeea62e47ae1f3a52f09c767f1c3f4ad4c7f4165074b not found: ID does not exist" Feb 27 19:45:55 crc kubenswrapper[4981]: I0227 19:45:55.155744 4981 scope.go:117] "RemoveContainer" containerID="6642535209ddf82451b534b5e0b0cdc8f44e22ab0f4b9313b765e6b435bf831d" Feb 27 19:45:55 crc kubenswrapper[4981]: E0227 19:45:55.156382 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6642535209ddf82451b534b5e0b0cdc8f44e22ab0f4b9313b765e6b435bf831d\": container with ID starting with 
6642535209ddf82451b534b5e0b0cdc8f44e22ab0f4b9313b765e6b435bf831d not found: ID does not exist" containerID="6642535209ddf82451b534b5e0b0cdc8f44e22ab0f4b9313b765e6b435bf831d" Feb 27 19:45:55 crc kubenswrapper[4981]: I0227 19:45:55.156445 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6642535209ddf82451b534b5e0b0cdc8f44e22ab0f4b9313b765e6b435bf831d"} err="failed to get container status \"6642535209ddf82451b534b5e0b0cdc8f44e22ab0f4b9313b765e6b435bf831d\": rpc error: code = NotFound desc = could not find container \"6642535209ddf82451b534b5e0b0cdc8f44e22ab0f4b9313b765e6b435bf831d\": container with ID starting with 6642535209ddf82451b534b5e0b0cdc8f44e22ab0f4b9313b765e6b435bf831d not found: ID does not exist" Feb 27 19:45:55 crc kubenswrapper[4981]: I0227 19:45:55.636545 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47f8e1a3-65ad-49a9-8a05-b927be5a5373" path="/var/lib/kubelet/pods/47f8e1a3-65ad-49a9-8a05-b927be5a5373/volumes" Feb 27 19:46:00 crc kubenswrapper[4981]: I0227 19:46:00.173487 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537026-spk6l"] Feb 27 19:46:00 crc kubenswrapper[4981]: E0227 19:46:00.174310 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47f8e1a3-65ad-49a9-8a05-b927be5a5373" containerName="gather" Feb 27 19:46:00 crc kubenswrapper[4981]: I0227 19:46:00.174332 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="47f8e1a3-65ad-49a9-8a05-b927be5a5373" containerName="gather" Feb 27 19:46:00 crc kubenswrapper[4981]: E0227 19:46:00.174364 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f1a7e95-fd5e-440b-8df5-aebf383a2d8a" containerName="extract-utilities" Feb 27 19:46:00 crc kubenswrapper[4981]: I0227 19:46:00.174376 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f1a7e95-fd5e-440b-8df5-aebf383a2d8a" containerName="extract-utilities" Feb 27 19:46:00 crc 
kubenswrapper[4981]: E0227 19:46:00.174396 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f1a7e95-fd5e-440b-8df5-aebf383a2d8a" containerName="extract-content" Feb 27 19:46:00 crc kubenswrapper[4981]: I0227 19:46:00.174408 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f1a7e95-fd5e-440b-8df5-aebf383a2d8a" containerName="extract-content" Feb 27 19:46:00 crc kubenswrapper[4981]: E0227 19:46:00.174444 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="994db529-e5b6-4fea-bf69-812e97ecb7a9" containerName="collect-profiles" Feb 27 19:46:00 crc kubenswrapper[4981]: I0227 19:46:00.174456 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="994db529-e5b6-4fea-bf69-812e97ecb7a9" containerName="collect-profiles" Feb 27 19:46:00 crc kubenswrapper[4981]: E0227 19:46:00.174477 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f1a7e95-fd5e-440b-8df5-aebf383a2d8a" containerName="registry-server" Feb 27 19:46:00 crc kubenswrapper[4981]: I0227 19:46:00.174488 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f1a7e95-fd5e-440b-8df5-aebf383a2d8a" containerName="registry-server" Feb 27 19:46:00 crc kubenswrapper[4981]: E0227 19:46:00.174510 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47f8e1a3-65ad-49a9-8a05-b927be5a5373" containerName="copy" Feb 27 19:46:00 crc kubenswrapper[4981]: I0227 19:46:00.174521 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="47f8e1a3-65ad-49a9-8a05-b927be5a5373" containerName="copy" Feb 27 19:46:00 crc kubenswrapper[4981]: I0227 19:46:00.174742 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="47f8e1a3-65ad-49a9-8a05-b927be5a5373" containerName="copy" Feb 27 19:46:00 crc kubenswrapper[4981]: I0227 19:46:00.174773 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="994db529-e5b6-4fea-bf69-812e97ecb7a9" containerName="collect-profiles" Feb 27 19:46:00 crc kubenswrapper[4981]: I0227 
19:46:00.174794 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f1a7e95-fd5e-440b-8df5-aebf383a2d8a" containerName="registry-server" Feb 27 19:46:00 crc kubenswrapper[4981]: I0227 19:46:00.174817 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="47f8e1a3-65ad-49a9-8a05-b927be5a5373" containerName="gather" Feb 27 19:46:00 crc kubenswrapper[4981]: I0227 19:46:00.175550 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537026-spk6l" Feb 27 19:46:00 crc kubenswrapper[4981]: I0227 19:46:00.184173 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537026-spk6l"] Feb 27 19:46:00 crc kubenswrapper[4981]: I0227 19:46:00.362822 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2plh\" (UniqueName: \"kubernetes.io/projected/2019d976-62d6-4efd-b601-5d43bfd19a3c-kube-api-access-h2plh\") pod \"auto-csr-approver-29537026-spk6l\" (UID: \"2019d976-62d6-4efd-b601-5d43bfd19a3c\") " pod="openshift-infra/auto-csr-approver-29537026-spk6l" Feb 27 19:46:00 crc kubenswrapper[4981]: I0227 19:46:00.464181 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2plh\" (UniqueName: \"kubernetes.io/projected/2019d976-62d6-4efd-b601-5d43bfd19a3c-kube-api-access-h2plh\") pod \"auto-csr-approver-29537026-spk6l\" (UID: \"2019d976-62d6-4efd-b601-5d43bfd19a3c\") " pod="openshift-infra/auto-csr-approver-29537026-spk6l" Feb 27 19:46:00 crc kubenswrapper[4981]: I0227 19:46:00.483647 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2plh\" (UniqueName: \"kubernetes.io/projected/2019d976-62d6-4efd-b601-5d43bfd19a3c-kube-api-access-h2plh\") pod \"auto-csr-approver-29537026-spk6l\" (UID: \"2019d976-62d6-4efd-b601-5d43bfd19a3c\") " pod="openshift-infra/auto-csr-approver-29537026-spk6l" Feb 27 19:46:00 crc 
kubenswrapper[4981]: I0227 19:46:00.495323 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537026-spk6l" Feb 27 19:46:00 crc kubenswrapper[4981]: W0227 19:46:00.899840 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2019d976_62d6_4efd_b601_5d43bfd19a3c.slice/crio-07432f7366113c5409864fa806e4a092704b7b190e4a9631f3cf82665d7cb96b WatchSource:0}: Error finding container 07432f7366113c5409864fa806e4a092704b7b190e4a9631f3cf82665d7cb96b: Status 404 returned error can't find the container with id 07432f7366113c5409864fa806e4a092704b7b190e4a9631f3cf82665d7cb96b Feb 27 19:46:00 crc kubenswrapper[4981]: I0227 19:46:00.902215 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537026-spk6l"] Feb 27 19:46:01 crc kubenswrapper[4981]: I0227 19:46:01.123414 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537026-spk6l" event={"ID":"2019d976-62d6-4efd-b601-5d43bfd19a3c","Type":"ContainerStarted","Data":"07432f7366113c5409864fa806e4a092704b7b190e4a9631f3cf82665d7cb96b"} Feb 27 19:46:01 crc kubenswrapper[4981]: E0227 19:46:01.827813 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:46:01 crc kubenswrapper[4981]: E0227 19:46:01.829101 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:46:01 crc kubenswrapper[4981]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range 
.items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:46:01 crc kubenswrapper[4981]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h2plh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537026-spk6l_openshift-infra(2019d976-62d6-4efd-b601-5d43bfd19a3c): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:46:01 crc kubenswrapper[4981]: > logger="UnhandledError" Feb 27 19:46:01 crc kubenswrapper[4981]: E0227 19:46:01.830434 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537026-spk6l" podUID="2019d976-62d6-4efd-b601-5d43bfd19a3c" Feb 27 19:46:02 crc kubenswrapper[4981]: E0227 19:46:02.139098 4981 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537026-spk6l" podUID="2019d976-62d6-4efd-b601-5d43bfd19a3c" Feb 27 19:46:06 crc kubenswrapper[4981]: E0227 19:46:06.630703 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593" Feb 27 19:46:07 crc kubenswrapper[4981]: E0227 19:46:07.629301 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46" Feb 27 19:46:09 crc kubenswrapper[4981]: E0227 19:46:09.629705 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" Feb 27 19:46:17 crc kubenswrapper[4981]: E0227 19:46:17.576348 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:46:17 crc kubenswrapper[4981]: E0227 19:46:17.576968 4981 kuberuntime_manager.go:1274] "Unhandled 
Error" err=< Feb 27 19:46:17 crc kubenswrapper[4981]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:46:17 crc kubenswrapper[4981]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h2plh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537026-spk6l_openshift-infra(2019d976-62d6-4efd-b601-5d43bfd19a3c): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:46:17 crc kubenswrapper[4981]: > logger="UnhandledError" Feb 27 19:46:17 crc kubenswrapper[4981]: E0227 19:46:17.578349 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" 
pod="openshift-infra/auto-csr-approver-29537026-spk6l" podUID="2019d976-62d6-4efd-b601-5d43bfd19a3c" Feb 27 19:46:18 crc kubenswrapper[4981]: E0227 19:46:18.631621 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593" Feb 27 19:46:20 crc kubenswrapper[4981]: I0227 19:46:20.249208 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:46:20 crc kubenswrapper[4981]: I0227 19:46:20.249551 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:46:22 crc kubenswrapper[4981]: E0227 19:46:22.631121 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46" Feb 27 19:46:23 crc kubenswrapper[4981]: E0227 19:46:23.631257 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" 
Feb 27 19:46:32 crc kubenswrapper[4981]: E0227 19:46:32.630794 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537026-spk6l" podUID="2019d976-62d6-4efd-b601-5d43bfd19a3c" Feb 27 19:46:33 crc kubenswrapper[4981]: E0227 19:46:33.630793 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593" Feb 27 19:46:34 crc kubenswrapper[4981]: E0227 19:46:34.629755 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" Feb 27 19:46:35 crc kubenswrapper[4981]: E0227 19:46:35.631517 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46" Feb 27 19:46:45 crc kubenswrapper[4981]: E0227 19:46:45.629995 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593" Feb 27 19:46:45 crc kubenswrapper[4981]: E0227 19:46:45.630143 4981 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" Feb 27 19:46:47 crc kubenswrapper[4981]: E0227 19:46:47.631245 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46" Feb 27 19:46:48 crc kubenswrapper[4981]: E0227 19:46:48.906158 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:46:48 crc kubenswrapper[4981]: E0227 19:46:48.906314 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:46:48 crc kubenswrapper[4981]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:46:48 crc kubenswrapper[4981]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h2plh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537026-spk6l_openshift-infra(2019d976-62d6-4efd-b601-5d43bfd19a3c): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:46:48 crc kubenswrapper[4981]: > logger="UnhandledError" Feb 27 19:46:48 crc kubenswrapper[4981]: E0227 19:46:48.908278 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537026-spk6l" podUID="2019d976-62d6-4efd-b601-5d43bfd19a3c" Feb 27 19:46:50 crc kubenswrapper[4981]: I0227 19:46:50.249158 4981 patch_prober.go:28] interesting pod/machine-config-daemon-5pm8g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:46:50 crc kubenswrapper[4981]: I0227 19:46:50.249248 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:46:50 crc kubenswrapper[4981]: I0227 19:46:50.249308 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" Feb 27 19:46:50 crc kubenswrapper[4981]: I0227 19:46:50.250134 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"810a98630d243f775d15a5f7a0dbc4550e506380d2e30e6da6d41ffca9dc5d6d"} pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 19:46:50 crc kubenswrapper[4981]: I0227 19:46:50.250225 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerName="machine-config-daemon" containerID="cri-o://810a98630d243f775d15a5f7a0dbc4550e506380d2e30e6da6d41ffca9dc5d6d" gracePeriod=600 Feb 27 19:46:50 crc kubenswrapper[4981]: E0227 19:46:50.375360 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" 
podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:46:50 crc kubenswrapper[4981]: I0227 19:46:50.466617 4981 generic.go:334] "Generic (PLEG): container finished" podID="1fefdc04-8285-4630-83d3-494dcc0216f6" containerID="810a98630d243f775d15a5f7a0dbc4550e506380d2e30e6da6d41ffca9dc5d6d" exitCode=0 Feb 27 19:46:50 crc kubenswrapper[4981]: I0227 19:46:50.466660 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" event={"ID":"1fefdc04-8285-4630-83d3-494dcc0216f6","Type":"ContainerDied","Data":"810a98630d243f775d15a5f7a0dbc4550e506380d2e30e6da6d41ffca9dc5d6d"} Feb 27 19:46:50 crc kubenswrapper[4981]: I0227 19:46:50.466713 4981 scope.go:117] "RemoveContainer" containerID="7e7587b85d64eef60150f43bd74b50c7e64a21cc95eaa522a2a9bc99615746d6" Feb 27 19:46:50 crc kubenswrapper[4981]: I0227 19:46:50.468604 4981 scope.go:117] "RemoveContainer" containerID="810a98630d243f775d15a5f7a0dbc4550e506380d2e30e6da6d41ffca9dc5d6d" Feb 27 19:46:50 crc kubenswrapper[4981]: E0227 19:46:50.469309 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:46:57 crc kubenswrapper[4981]: E0227 19:46:57.630525 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593" Feb 27 19:47:00 crc kubenswrapper[4981]: E0227 19:47:00.630892 4981 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" Feb 27 19:47:02 crc kubenswrapper[4981]: E0227 19:47:02.629741 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46" Feb 27 19:47:03 crc kubenswrapper[4981]: E0227 19:47:03.631576 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537026-spk6l" podUID="2019d976-62d6-4efd-b601-5d43bfd19a3c" Feb 27 19:47:05 crc kubenswrapper[4981]: I0227 19:47:05.628492 4981 scope.go:117] "RemoveContainer" containerID="810a98630d243f775d15a5f7a0dbc4550e506380d2e30e6da6d41ffca9dc5d6d" Feb 27 19:47:05 crc kubenswrapper[4981]: E0227 19:47:05.628921 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:47:08 crc kubenswrapper[4981]: E0227 19:47:08.631556 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593" Feb 27 19:47:12 crc kubenswrapper[4981]: E0227 19:47:12.629882 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" Feb 27 19:47:13 crc kubenswrapper[4981]: E0227 19:47:13.629828 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46" Feb 27 19:47:16 crc kubenswrapper[4981]: I0227 19:47:16.628645 4981 scope.go:117] "RemoveContainer" containerID="810a98630d243f775d15a5f7a0dbc4550e506380d2e30e6da6d41ffca9dc5d6d" Feb 27 19:47:16 crc kubenswrapper[4981]: E0227 19:47:16.629218 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:47:16 crc kubenswrapper[4981]: E0227 19:47:16.630592 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537026-spk6l" podUID="2019d976-62d6-4efd-b601-5d43bfd19a3c" Feb 27 19:47:22 crc kubenswrapper[4981]: E0227 19:47:22.632351 4981 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593" Feb 27 19:47:23 crc kubenswrapper[4981]: E0227 19:47:23.630623 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" Feb 27 19:47:28 crc kubenswrapper[4981]: E0227 19:47:28.630590 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46" Feb 27 19:47:29 crc kubenswrapper[4981]: I0227 19:47:29.628437 4981 scope.go:117] "RemoveContainer" containerID="810a98630d243f775d15a5f7a0dbc4550e506380d2e30e6da6d41ffca9dc5d6d" Feb 27 19:47:29 crc kubenswrapper[4981]: E0227 19:47:29.628706 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:47:32 crc kubenswrapper[4981]: E0227 19:47:32.481077 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:47:32 crc kubenswrapper[4981]: E0227 19:47:32.481209 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:47:32 crc kubenswrapper[4981]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:47:32 crc kubenswrapper[4981]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h2plh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537026-spk6l_openshift-infra(2019d976-62d6-4efd-b601-5d43bfd19a3c): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:47:32 crc kubenswrapper[4981]: > logger="UnhandledError" Feb 27 19:47:32 crc kubenswrapper[4981]: E0227 19:47:32.482383 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537026-spk6l" podUID="2019d976-62d6-4efd-b601-5d43bfd19a3c" Feb 27 19:47:32 crc kubenswrapper[4981]: I0227 19:47:32.828990 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zzslc"] Feb 27 19:47:32 crc kubenswrapper[4981]: I0227 19:47:32.831274 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zzslc" Feb 27 19:47:32 crc kubenswrapper[4981]: I0227 19:47:32.843429 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zzslc"] Feb 27 19:47:32 crc kubenswrapper[4981]: I0227 19:47:32.975756 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b85f001-b726-4b01-9bfa-64731c653fd1-catalog-content\") pod \"certified-operators-zzslc\" (UID: \"9b85f001-b726-4b01-9bfa-64731c653fd1\") " pod="openshift-marketplace/certified-operators-zzslc" Feb 27 19:47:32 crc kubenswrapper[4981]: I0227 19:47:32.975828 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b85f001-b726-4b01-9bfa-64731c653fd1-utilities\") pod \"certified-operators-zzslc\" (UID: \"9b85f001-b726-4b01-9bfa-64731c653fd1\") " pod="openshift-marketplace/certified-operators-zzslc" Feb 27 19:47:32 crc kubenswrapper[4981]: I0227 19:47:32.975876 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm6fw\" (UniqueName: 
\"kubernetes.io/projected/9b85f001-b726-4b01-9bfa-64731c653fd1-kube-api-access-tm6fw\") pod \"certified-operators-zzslc\" (UID: \"9b85f001-b726-4b01-9bfa-64731c653fd1\") " pod="openshift-marketplace/certified-operators-zzslc" Feb 27 19:47:33 crc kubenswrapper[4981]: I0227 19:47:33.077100 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b85f001-b726-4b01-9bfa-64731c653fd1-catalog-content\") pod \"certified-operators-zzslc\" (UID: \"9b85f001-b726-4b01-9bfa-64731c653fd1\") " pod="openshift-marketplace/certified-operators-zzslc" Feb 27 19:47:33 crc kubenswrapper[4981]: I0227 19:47:33.077154 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b85f001-b726-4b01-9bfa-64731c653fd1-utilities\") pod \"certified-operators-zzslc\" (UID: \"9b85f001-b726-4b01-9bfa-64731c653fd1\") " pod="openshift-marketplace/certified-operators-zzslc" Feb 27 19:47:33 crc kubenswrapper[4981]: I0227 19:47:33.077187 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm6fw\" (UniqueName: \"kubernetes.io/projected/9b85f001-b726-4b01-9bfa-64731c653fd1-kube-api-access-tm6fw\") pod \"certified-operators-zzslc\" (UID: \"9b85f001-b726-4b01-9bfa-64731c653fd1\") " pod="openshift-marketplace/certified-operators-zzslc" Feb 27 19:47:33 crc kubenswrapper[4981]: I0227 19:47:33.077626 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b85f001-b726-4b01-9bfa-64731c653fd1-catalog-content\") pod \"certified-operators-zzslc\" (UID: \"9b85f001-b726-4b01-9bfa-64731c653fd1\") " pod="openshift-marketplace/certified-operators-zzslc" Feb 27 19:47:33 crc kubenswrapper[4981]: I0227 19:47:33.077688 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9b85f001-b726-4b01-9bfa-64731c653fd1-utilities\") pod \"certified-operators-zzslc\" (UID: \"9b85f001-b726-4b01-9bfa-64731c653fd1\") " pod="openshift-marketplace/certified-operators-zzslc" Feb 27 19:47:33 crc kubenswrapper[4981]: I0227 19:47:33.101201 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm6fw\" (UniqueName: \"kubernetes.io/projected/9b85f001-b726-4b01-9bfa-64731c653fd1-kube-api-access-tm6fw\") pod \"certified-operators-zzslc\" (UID: \"9b85f001-b726-4b01-9bfa-64731c653fd1\") " pod="openshift-marketplace/certified-operators-zzslc" Feb 27 19:47:33 crc kubenswrapper[4981]: I0227 19:47:33.153880 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zzslc" Feb 27 19:47:33 crc kubenswrapper[4981]: E0227 19:47:33.629655 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593" Feb 27 19:47:33 crc kubenswrapper[4981]: I0227 19:47:33.680273 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zzslc"] Feb 27 19:47:33 crc kubenswrapper[4981]: W0227 19:47:33.695804 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b85f001_b726_4b01_9bfa_64731c653fd1.slice/crio-fcba374bb63c47e03729724c110be559db482cb9cc7f0e5fccc1ee3901cffab1 WatchSource:0}: Error finding container fcba374bb63c47e03729724c110be559db482cb9cc7f0e5fccc1ee3901cffab1: Status 404 returned error can't find the container with id fcba374bb63c47e03729724c110be559db482cb9cc7f0e5fccc1ee3901cffab1 Feb 27 19:47:33 crc kubenswrapper[4981]: I0227 19:47:33.767399 4981 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzslc" event={"ID":"9b85f001-b726-4b01-9bfa-64731c653fd1","Type":"ContainerStarted","Data":"fcba374bb63c47e03729724c110be559db482cb9cc7f0e5fccc1ee3901cffab1"} Feb 27 19:47:34 crc kubenswrapper[4981]: E0227 19:47:34.629270 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" Feb 27 19:47:34 crc kubenswrapper[4981]: I0227 19:47:34.773890 4981 generic.go:334] "Generic (PLEG): container finished" podID="9b85f001-b726-4b01-9bfa-64731c653fd1" containerID="9f5da6f64f2828965ee33067068b7bcbc31cc47a383d72af2e5f3d77ce5dcc56" exitCode=0 Feb 27 19:47:34 crc kubenswrapper[4981]: I0227 19:47:34.773932 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzslc" event={"ID":"9b85f001-b726-4b01-9bfa-64731c653fd1","Type":"ContainerDied","Data":"9f5da6f64f2828965ee33067068b7bcbc31cc47a383d72af2e5f3d77ce5dcc56"} Feb 27 19:47:35 crc kubenswrapper[4981]: E0227 19:47:35.648438 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 19:47:35 crc kubenswrapper[4981]: E0227 19:47:35.648621 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tm6fw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-zzslc_openshift-marketplace(9b85f001-b726-4b01-9bfa-64731c653fd1): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:47:35 crc 
kubenswrapper[4981]: E0227 19:47:35.649818 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-zzslc" podUID="9b85f001-b726-4b01-9bfa-64731c653fd1" Feb 27 19:47:35 crc kubenswrapper[4981]: E0227 19:47:35.782770 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zzslc" podUID="9b85f001-b726-4b01-9bfa-64731c653fd1" Feb 27 19:47:41 crc kubenswrapper[4981]: I0227 19:47:41.634345 4981 scope.go:117] "RemoveContainer" containerID="810a98630d243f775d15a5f7a0dbc4550e506380d2e30e6da6d41ffca9dc5d6d" Feb 27 19:47:41 crc kubenswrapper[4981]: E0227 19:47:41.635783 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46" Feb 27 19:47:41 crc kubenswrapper[4981]: E0227 19:47:41.635908 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" 
Feb 27 19:47:44 crc kubenswrapper[4981]: E0227 19:47:44.630632 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537026-spk6l" podUID="2019d976-62d6-4efd-b601-5d43bfd19a3c" Feb 27 19:47:46 crc kubenswrapper[4981]: E0227 19:47:46.631128 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593" Feb 27 19:47:47 crc kubenswrapper[4981]: I0227 19:47:47.868837 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzslc" event={"ID":"9b85f001-b726-4b01-9bfa-64731c653fd1","Type":"ContainerStarted","Data":"1c0b4b90edf645f12b3909f6e720bc21e88d940525306682b050c344f976b8f5"} Feb 27 19:47:48 crc kubenswrapper[4981]: E0227 19:47:48.630265 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" Feb 27 19:47:48 crc kubenswrapper[4981]: I0227 19:47:48.879082 4981 generic.go:334] "Generic (PLEG): container finished" podID="9b85f001-b726-4b01-9bfa-64731c653fd1" containerID="1c0b4b90edf645f12b3909f6e720bc21e88d940525306682b050c344f976b8f5" exitCode=0 Feb 27 19:47:48 crc kubenswrapper[4981]: I0227 19:47:48.879091 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzslc" 
event={"ID":"9b85f001-b726-4b01-9bfa-64731c653fd1","Type":"ContainerDied","Data":"1c0b4b90edf645f12b3909f6e720bc21e88d940525306682b050c344f976b8f5"} Feb 27 19:47:49 crc kubenswrapper[4981]: I0227 19:47:49.886995 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzslc" event={"ID":"9b85f001-b726-4b01-9bfa-64731c653fd1","Type":"ContainerStarted","Data":"56a0dc92414fc0607b1207977663e9f4a408530bfce1b06578e42e01dff8a969"} Feb 27 19:47:49 crc kubenswrapper[4981]: I0227 19:47:49.911626 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zzslc" podStartSLOduration=3.407930902 podStartE2EDuration="17.911605123s" podCreationTimestamp="2026-02-27 19:47:32 +0000 UTC" firstStartedPulling="2026-02-27 19:47:34.775168113 +0000 UTC m=+3754.253949273" lastFinishedPulling="2026-02-27 19:47:49.278842334 +0000 UTC m=+3768.757623494" observedRunningTime="2026-02-27 19:47:49.907929569 +0000 UTC m=+3769.386710749" watchObservedRunningTime="2026-02-27 19:47:49.911605123 +0000 UTC m=+3769.390386283" Feb 27 19:47:52 crc kubenswrapper[4981]: I0227 19:47:52.628521 4981 scope.go:117] "RemoveContainer" containerID="810a98630d243f775d15a5f7a0dbc4550e506380d2e30e6da6d41ffca9dc5d6d" Feb 27 19:47:52 crc kubenswrapper[4981]: E0227 19:47:52.629044 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:47:53 crc kubenswrapper[4981]: I0227 19:47:53.155401 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zzslc" Feb 27 19:47:53 crc 
kubenswrapper[4981]: I0227 19:47:53.155470 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zzslc" Feb 27 19:47:53 crc kubenswrapper[4981]: I0227 19:47:53.196799 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zzslc" Feb 27 19:47:53 crc kubenswrapper[4981]: E0227 19:47:53.630208 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46" Feb 27 19:47:57 crc kubenswrapper[4981]: E0227 19:47:57.631121 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537026-spk6l" podUID="2019d976-62d6-4efd-b601-5d43bfd19a3c" Feb 27 19:47:59 crc kubenswrapper[4981]: E0227 19:47:59.631249 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593" Feb 27 19:48:00 crc kubenswrapper[4981]: I0227 19:48:00.138658 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537028-2g7fs"] Feb 27 19:48:00 crc kubenswrapper[4981]: I0227 19:48:00.139612 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537028-2g7fs" Feb 27 19:48:00 crc kubenswrapper[4981]: I0227 19:48:00.150379 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537028-2g7fs"] Feb 27 19:48:00 crc kubenswrapper[4981]: I0227 19:48:00.280270 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqs54\" (UniqueName: \"kubernetes.io/projected/3907631f-159a-409c-8050-335a0445ae5d-kube-api-access-mqs54\") pod \"auto-csr-approver-29537028-2g7fs\" (UID: \"3907631f-159a-409c-8050-335a0445ae5d\") " pod="openshift-infra/auto-csr-approver-29537028-2g7fs" Feb 27 19:48:00 crc kubenswrapper[4981]: I0227 19:48:00.381888 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqs54\" (UniqueName: \"kubernetes.io/projected/3907631f-159a-409c-8050-335a0445ae5d-kube-api-access-mqs54\") pod \"auto-csr-approver-29537028-2g7fs\" (UID: \"3907631f-159a-409c-8050-335a0445ae5d\") " pod="openshift-infra/auto-csr-approver-29537028-2g7fs" Feb 27 19:48:00 crc kubenswrapper[4981]: I0227 19:48:00.407031 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqs54\" (UniqueName: \"kubernetes.io/projected/3907631f-159a-409c-8050-335a0445ae5d-kube-api-access-mqs54\") pod \"auto-csr-approver-29537028-2g7fs\" (UID: \"3907631f-159a-409c-8050-335a0445ae5d\") " pod="openshift-infra/auto-csr-approver-29537028-2g7fs" Feb 27 19:48:00 crc kubenswrapper[4981]: I0227 19:48:00.462795 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537028-2g7fs" Feb 27 19:48:00 crc kubenswrapper[4981]: I0227 19:48:00.852343 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537028-2g7fs"] Feb 27 19:48:00 crc kubenswrapper[4981]: I0227 19:48:00.982622 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537028-2g7fs" event={"ID":"3907631f-159a-409c-8050-335a0445ae5d","Type":"ContainerStarted","Data":"72b987dab3773807da5b84d1e3b76d7d70ecbddbb6433a28e72f4a881a40054d"} Feb 27 19:48:01 crc kubenswrapper[4981]: E0227 19:48:01.846657 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:48:01 crc kubenswrapper[4981]: E0227 19:48:01.846998 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:48:01 crc kubenswrapper[4981]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:48:01 crc kubenswrapper[4981]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mqs54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537028-2g7fs_openshift-infra(3907631f-159a-409c-8050-335a0445ae5d): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:48:01 crc kubenswrapper[4981]: > logger="UnhandledError" Feb 27 19:48:01 crc kubenswrapper[4981]: E0227 19:48:01.848305 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537028-2g7fs" podUID="3907631f-159a-409c-8050-335a0445ae5d" Feb 27 19:48:01 crc kubenswrapper[4981]: E0227 19:48:01.990982 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29537028-2g7fs" podUID="3907631f-159a-409c-8050-335a0445ae5d" Feb 27 19:48:02 crc kubenswrapper[4981]: E0227 19:48:02.630208 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" Feb 27 19:48:03 crc kubenswrapper[4981]: I0227 19:48:03.227928 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zzslc" Feb 27 19:48:03 crc kubenswrapper[4981]: I0227 19:48:03.282310 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zzslc"] Feb 27 19:48:04 crc kubenswrapper[4981]: I0227 19:48:04.005510 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zzslc" podUID="9b85f001-b726-4b01-9bfa-64731c653fd1" containerName="registry-server" containerID="cri-o://56a0dc92414fc0607b1207977663e9f4a408530bfce1b06578e42e01dff8a969" gracePeriod=2 Feb 27 19:48:04 crc kubenswrapper[4981]: I0227 19:48:04.379909 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zzslc" Feb 27 19:48:04 crc kubenswrapper[4981]: I0227 19:48:04.541117 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b85f001-b726-4b01-9bfa-64731c653fd1-utilities\") pod \"9b85f001-b726-4b01-9bfa-64731c653fd1\" (UID: \"9b85f001-b726-4b01-9bfa-64731c653fd1\") " Feb 27 19:48:04 crc kubenswrapper[4981]: I0227 19:48:04.541218 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b85f001-b726-4b01-9bfa-64731c653fd1-catalog-content\") pod \"9b85f001-b726-4b01-9bfa-64731c653fd1\" (UID: \"9b85f001-b726-4b01-9bfa-64731c653fd1\") " Feb 27 19:48:04 crc kubenswrapper[4981]: I0227 19:48:04.541266 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm6fw\" (UniqueName: \"kubernetes.io/projected/9b85f001-b726-4b01-9bfa-64731c653fd1-kube-api-access-tm6fw\") pod \"9b85f001-b726-4b01-9bfa-64731c653fd1\" (UID: \"9b85f001-b726-4b01-9bfa-64731c653fd1\") " Feb 27 19:48:04 crc kubenswrapper[4981]: I0227 19:48:04.541971 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b85f001-b726-4b01-9bfa-64731c653fd1-utilities" (OuterVolumeSpecName: "utilities") pod "9b85f001-b726-4b01-9bfa-64731c653fd1" (UID: "9b85f001-b726-4b01-9bfa-64731c653fd1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:48:04 crc kubenswrapper[4981]: I0227 19:48:04.550758 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b85f001-b726-4b01-9bfa-64731c653fd1-kube-api-access-tm6fw" (OuterVolumeSpecName: "kube-api-access-tm6fw") pod "9b85f001-b726-4b01-9bfa-64731c653fd1" (UID: "9b85f001-b726-4b01-9bfa-64731c653fd1"). InnerVolumeSpecName "kube-api-access-tm6fw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:48:04 crc kubenswrapper[4981]: I0227 19:48:04.629473 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b85f001-b726-4b01-9bfa-64731c653fd1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b85f001-b726-4b01-9bfa-64731c653fd1" (UID: "9b85f001-b726-4b01-9bfa-64731c653fd1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:48:04 crc kubenswrapper[4981]: I0227 19:48:04.643235 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b85f001-b726-4b01-9bfa-64731c653fd1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:48:04 crc kubenswrapper[4981]: I0227 19:48:04.643503 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tm6fw\" (UniqueName: \"kubernetes.io/projected/9b85f001-b726-4b01-9bfa-64731c653fd1-kube-api-access-tm6fw\") on node \"crc\" DevicePath \"\"" Feb 27 19:48:04 crc kubenswrapper[4981]: I0227 19:48:04.643602 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b85f001-b726-4b01-9bfa-64731c653fd1-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:48:05 crc kubenswrapper[4981]: I0227 19:48:05.016182 4981 generic.go:334] "Generic (PLEG): container finished" podID="9b85f001-b726-4b01-9bfa-64731c653fd1" containerID="56a0dc92414fc0607b1207977663e9f4a408530bfce1b06578e42e01dff8a969" exitCode=0 Feb 27 19:48:05 crc kubenswrapper[4981]: I0227 19:48:05.016221 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzslc" event={"ID":"9b85f001-b726-4b01-9bfa-64731c653fd1","Type":"ContainerDied","Data":"56a0dc92414fc0607b1207977663e9f4a408530bfce1b06578e42e01dff8a969"} Feb 27 19:48:05 crc kubenswrapper[4981]: I0227 19:48:05.016317 4981 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-zzslc" event={"ID":"9b85f001-b726-4b01-9bfa-64731c653fd1","Type":"ContainerDied","Data":"fcba374bb63c47e03729724c110be559db482cb9cc7f0e5fccc1ee3901cffab1"} Feb 27 19:48:05 crc kubenswrapper[4981]: I0227 19:48:05.016343 4981 scope.go:117] "RemoveContainer" containerID="56a0dc92414fc0607b1207977663e9f4a408530bfce1b06578e42e01dff8a969" Feb 27 19:48:05 crc kubenswrapper[4981]: I0227 19:48:05.016557 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zzslc" Feb 27 19:48:05 crc kubenswrapper[4981]: I0227 19:48:05.050430 4981 scope.go:117] "RemoveContainer" containerID="1c0b4b90edf645f12b3909f6e720bc21e88d940525306682b050c344f976b8f5" Feb 27 19:48:05 crc kubenswrapper[4981]: I0227 19:48:05.051283 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zzslc"] Feb 27 19:48:05 crc kubenswrapper[4981]: I0227 19:48:05.057027 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zzslc"] Feb 27 19:48:05 crc kubenswrapper[4981]: I0227 19:48:05.069613 4981 scope.go:117] "RemoveContainer" containerID="9f5da6f64f2828965ee33067068b7bcbc31cc47a383d72af2e5f3d77ce5dcc56" Feb 27 19:48:05 crc kubenswrapper[4981]: I0227 19:48:05.092171 4981 scope.go:117] "RemoveContainer" containerID="56a0dc92414fc0607b1207977663e9f4a408530bfce1b06578e42e01dff8a969" Feb 27 19:48:05 crc kubenswrapper[4981]: E0227 19:48:05.092585 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56a0dc92414fc0607b1207977663e9f4a408530bfce1b06578e42e01dff8a969\": container with ID starting with 56a0dc92414fc0607b1207977663e9f4a408530bfce1b06578e42e01dff8a969 not found: ID does not exist" containerID="56a0dc92414fc0607b1207977663e9f4a408530bfce1b06578e42e01dff8a969" Feb 27 19:48:05 crc kubenswrapper[4981]: I0227 
19:48:05.092630 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56a0dc92414fc0607b1207977663e9f4a408530bfce1b06578e42e01dff8a969"} err="failed to get container status \"56a0dc92414fc0607b1207977663e9f4a408530bfce1b06578e42e01dff8a969\": rpc error: code = NotFound desc = could not find container \"56a0dc92414fc0607b1207977663e9f4a408530bfce1b06578e42e01dff8a969\": container with ID starting with 56a0dc92414fc0607b1207977663e9f4a408530bfce1b06578e42e01dff8a969 not found: ID does not exist" Feb 27 19:48:05 crc kubenswrapper[4981]: I0227 19:48:05.092659 4981 scope.go:117] "RemoveContainer" containerID="1c0b4b90edf645f12b3909f6e720bc21e88d940525306682b050c344f976b8f5" Feb 27 19:48:05 crc kubenswrapper[4981]: E0227 19:48:05.093036 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c0b4b90edf645f12b3909f6e720bc21e88d940525306682b050c344f976b8f5\": container with ID starting with 1c0b4b90edf645f12b3909f6e720bc21e88d940525306682b050c344f976b8f5 not found: ID does not exist" containerID="1c0b4b90edf645f12b3909f6e720bc21e88d940525306682b050c344f976b8f5" Feb 27 19:48:05 crc kubenswrapper[4981]: I0227 19:48:05.093073 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c0b4b90edf645f12b3909f6e720bc21e88d940525306682b050c344f976b8f5"} err="failed to get container status \"1c0b4b90edf645f12b3909f6e720bc21e88d940525306682b050c344f976b8f5\": rpc error: code = NotFound desc = could not find container \"1c0b4b90edf645f12b3909f6e720bc21e88d940525306682b050c344f976b8f5\": container with ID starting with 1c0b4b90edf645f12b3909f6e720bc21e88d940525306682b050c344f976b8f5 not found: ID does not exist" Feb 27 19:48:05 crc kubenswrapper[4981]: I0227 19:48:05.093087 4981 scope.go:117] "RemoveContainer" containerID="9f5da6f64f2828965ee33067068b7bcbc31cc47a383d72af2e5f3d77ce5dcc56" Feb 27 19:48:05 crc 
kubenswrapper[4981]: E0227 19:48:05.093385 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f5da6f64f2828965ee33067068b7bcbc31cc47a383d72af2e5f3d77ce5dcc56\": container with ID starting with 9f5da6f64f2828965ee33067068b7bcbc31cc47a383d72af2e5f3d77ce5dcc56 not found: ID does not exist" containerID="9f5da6f64f2828965ee33067068b7bcbc31cc47a383d72af2e5f3d77ce5dcc56" Feb 27 19:48:05 crc kubenswrapper[4981]: I0227 19:48:05.093415 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f5da6f64f2828965ee33067068b7bcbc31cc47a383d72af2e5f3d77ce5dcc56"} err="failed to get container status \"9f5da6f64f2828965ee33067068b7bcbc31cc47a383d72af2e5f3d77ce5dcc56\": rpc error: code = NotFound desc = could not find container \"9f5da6f64f2828965ee33067068b7bcbc31cc47a383d72af2e5f3d77ce5dcc56\": container with ID starting with 9f5da6f64f2828965ee33067068b7bcbc31cc47a383d72af2e5f3d77ce5dcc56 not found: ID does not exist" Feb 27 19:48:05 crc kubenswrapper[4981]: I0227 19:48:05.640552 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b85f001-b726-4b01-9bfa-64731c653fd1" path="/var/lib/kubelet/pods/9b85f001-b726-4b01-9bfa-64731c653fd1/volumes" Feb 27 19:48:06 crc kubenswrapper[4981]: I0227 19:48:06.628857 4981 scope.go:117] "RemoveContainer" containerID="810a98630d243f775d15a5f7a0dbc4550e506380d2e30e6da6d41ffca9dc5d6d" Feb 27 19:48:06 crc kubenswrapper[4981]: E0227 19:48:06.629085 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:48:07 crc 
kubenswrapper[4981]: E0227 19:48:07.630181 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46" Feb 27 19:48:11 crc kubenswrapper[4981]: E0227 19:48:11.641629 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s7jgl" podUID="6c006c9c-d6e0-46b9-af87-487c821d5593" Feb 27 19:48:12 crc kubenswrapper[4981]: E0227 19:48:12.630394 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537026-spk6l" podUID="2019d976-62d6-4efd-b601-5d43bfd19a3c" Feb 27 19:48:13 crc kubenswrapper[4981]: E0227 19:48:13.630744 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" Feb 27 19:48:16 crc kubenswrapper[4981]: I0227 19:48:16.631684 4981 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 19:48:17 crc kubenswrapper[4981]: E0227 19:48:17.759160 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:48:17 crc kubenswrapper[4981]: E0227 19:48:17.759532 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:48:17 crc kubenswrapper[4981]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:48:17 crc kubenswrapper[4981]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mqs54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537028-2g7fs_openshift-infra(3907631f-159a-409c-8050-335a0445ae5d): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:48:17 crc kubenswrapper[4981]: > logger="UnhandledError" Feb 27 19:48:17 crc kubenswrapper[4981]: E0227 19:48:17.760676 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537028-2g7fs" podUID="3907631f-159a-409c-8050-335a0445ae5d" Feb 27 19:48:19 crc kubenswrapper[4981]: I0227 19:48:19.629396 4981 scope.go:117] "RemoveContainer" containerID="810a98630d243f775d15a5f7a0dbc4550e506380d2e30e6da6d41ffca9dc5d6d" Feb 27 19:48:19 crc kubenswrapper[4981]: E0227 19:48:19.629693 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5pm8g_openshift-machine-config-operator(1fefdc04-8285-4630-83d3-494dcc0216f6)\"" pod="openshift-machine-config-operator/machine-config-daemon-5pm8g" podUID="1fefdc04-8285-4630-83d3-494dcc0216f6" Feb 27 19:48:19 crc kubenswrapper[4981]: E0227 19:48:19.782953 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:48:19 crc kubenswrapper[4981]: E0227 19:48:19.783205 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:48:19 crc kubenswrapper[4981]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve 
Feb 27 19:48:19 crc kubenswrapper[4981]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qphf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537022-gk6n2_openshift-infra(e853497d-5551-44e1-82d2-9915151f5e46): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:48:19 crc kubenswrapper[4981]: > logger="UnhandledError" Feb 27 19:48:19 crc kubenswrapper[4981]: E0227 19:48:19.784968 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537022-gk6n2" podUID="e853497d-5551-44e1-82d2-9915151f5e46" Feb 27 19:48:25 crc kubenswrapper[4981]: E0227 19:48:25.322551 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading 
signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 19:48:25 crc kubenswrapper[4981]: E0227 19:48:25.323239 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrpvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-f9ltj_openshift-marketplace(64c273de-1f65-4ec7-b2a0-c070e4d29ce6): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:48:25 crc kubenswrapper[4981]: E0227 19:48:25.324483 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-f9ltj" podUID="64c273de-1f65-4ec7-b2a0-c070e4d29ce6" Feb 27 19:48:25 crc kubenswrapper[4981]: E0227 19:48:25.629967 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537026-spk6l" podUID="2019d976-62d6-4efd-b601-5d43bfd19a3c" Feb 27 19:48:27 crc kubenswrapper[4981]: I0227 19:48:27.363991 4981 generic.go:334] "Generic (PLEG): container finished" podID="6c006c9c-d6e0-46b9-af87-487c821d5593" containerID="c515cefde4c301996f5ffa4dba3736ead9a0982f66a8c078b8bb613f1f13416d" exitCode=0 Feb 27 19:48:27 crc kubenswrapper[4981]: I0227 19:48:27.364360 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s7jgl" event={"ID":"6c006c9c-d6e0-46b9-af87-487c821d5593","Type":"ContainerDied","Data":"c515cefde4c301996f5ffa4dba3736ead9a0982f66a8c078b8bb613f1f13416d"}